Proving convergence of a sequence for all integers
Hint: use the fact that $|f(x)-f(y)|=|f'(c)(x-y)|\le a|x-y|$ to prove that your sequence is Cauchy.
y = x^25 -- How to solve for x.
Yes, that is correct. Note that then we would have $x^{25} = \left( \sqrt[25]{y} \right)^{25} = y$ as expected.
Pointwise convergence of $f_n(x) = nx\int_n^{nx+1}y^{-2}e^{n/y}dy$
Use the substitution $u = e^{n/y}$; then $du = -ny^{-2} e^{n/y}\, dy$, hence $$f_n(x) = nx \int_e^{e^{n/(nx+1)}} \left(-\frac{1}{n}\right) du = -x \int_e^{e^{n/(nx+1)}} du,$$ and this is easy to compute.
Calculating a free resolution of $\mathbb Q[x,y,z]/I$ where $I = (x,y,z)$
Although in the comments the essential hint has been given, to use the Koszul complex, I give the calculation with Macaulay2 below. As you see, it is simply the command `res (R^1/id1)` in line i4. Note that `R/id1` instead of `R^1/id1` would give an error, as the argument of `res` must be a module, not a ring. With the commands `C.dd_(number)` you get the maps between the modules in the complex. The generators that you are looking for are in `C.dd_2`.

```
i1 : R=QQ[x,y,z]

o1 = R

o1 : PolynomialRing

i2 : id1 = ideal gens R

o2 = ideal (x, y, z)

o2 : Ideal of R

i4 : res (R^1/id1)

      1      3      3      1
o4 = R  <-- R  <-- R  <-- R  <-- 0
     0      1      2      3      4

o4 : ChainComplex

i5 : C = oo

      1      3      3      1
o5 = R  <-- R  <-- R  <-- R  <-- 0
     0      1      2      3      4

o5 : ChainComplex

i6 : C.dd_1

o6 = | x y z |

             1       3
o6 : Matrix R  <--- R

i7 : C.dd_0

o7 = 0

                   1
o7 : Matrix 0 <--- R

i8 : C.dd_2

o8 = {1} | -y -z 0  |
     {1} | x  0  -z |
     {1} | 0  x  y  |

             3       3
o8 : Matrix R  <--- R

i9 : C.dd_3

o9 = {2} | z  |
     {2} | -y |
     {2} | x  |

             3       1
o9 : Matrix R  <--- R
```
What is the Grossencharacter of this CM curve?
First of all, the curve $E = \mathbf{C}/\mathcal{O}$ is only defined over $K$ if $K$ has class number one. So, unless you make this assumption, the Grossencharacter will only be defined over $H$, the Hilbert class field of $K$. Also, there are various ways to think about what a Grossencharacter is; I assume you are interested in a map $$\chi: K^{\times} \backslash \mathbb{A}^{\times}_K \rightarrow \mathbf{C}^{\times}.$$

Suppose that $K$ is an imaginary quadratic field of class number one, and suppose for convenience that the number of roots of unity is $w_K = 2$, so $\Delta_K \ne -3, -4$. This actually implies that $\Delta_K = - p$ for some prime $p \in \{7,11,19,43,67,163\}$, all of which are $3 \pmod 4$. In particular, $-1$ is not a square modulo $p$.

$\chi$ is certainly determined by its restriction to $K^{\times}_v$ for all $v$. We will have: $$\chi_{\infty}: \mathbf{C}^{\times} \rightarrow \mathbf{C}^{\times}$$ given by $z \mapsto z$. On the other hand, suppose that $v$ is a finite prime different from $p$. Then $$\chi_{v}: K^{\times}_v \rightarrow \mathbf{C}^{\times}$$ is trivial on $\mathcal{O}^{\times}_v$, and thus is determined by the image of $\pi_v$. Since $K$ has class number one, there is a global $\pi \in \mathcal{O}_K$ such that $(\pi)$ is the prime above $v$. This is unique up to units, so unique up to $\pm 1$. Let $\chi_v(\pi_v)$ be the choice of $\pm \pi^{-1}$ which is a quadratic residue modulo $\sqrt{p}$.

Finally, for $v = p$, we have $K_v = \mathbf{Q}_p(\sqrt{-p})$. Now let $\chi_p(\sqrt{-p}) = 1/\sqrt{-p}$. To finish the description of $\chi_p$, we need to define its values on $\mathcal{O}^{\times}_v$. Unlike for the other primes, it is no longer trivial. Instead, let $\chi_p$ on this space be the quadratic character $$\chi_p: a \in \mathcal{O}^{\times}_v \mapsto \left(\frac{a}{p}\right) \in \pm 1 \subset \mathbf{C}^{\times}.$$ This completes the description of $\chi$.

However, we may also use this description to define $\chi$. We certainly get a character of $\mathbb{A}^{\times}_K \rightarrow \mathbf{C}^{\times}$ with the property that on the finite ideles it has conductor $\sqrt{p}$. To show that this is a map on $K^{\times}\backslash \mathbb{A}^{\times}_K$, it suffices to show that $\chi$ vanishes on $K^{\times}$. It certainly vanishes on $\sqrt{-p}$, since, if $\alpha = (\sqrt{-p},\sqrt{-p},\ldots )$, then $$\chi_v(\sqrt{-p}) = 1, \ v \ne p,\infty, \qquad \chi_p(\sqrt{-p}) = 1/\sqrt{-p}, \qquad \chi_{\infty}(\sqrt{-p}) = \sqrt{-p},$$ and the product of these numbers is trivial.

Anything in $K^{\times}$ can be written as a power of $\sqrt{-p}$ times some $x \in K^{\times}$ which has trivial valuation at $p$. By unique factorization, write $$x = \epsilon \prod \pi_i,$$ where the $\pi_i$ are chosen (uniquely) to be squares modulo $p$, and $\epsilon = \pm 1$. Note that $$\left(\frac{x}{p}\right) = \epsilon.$$ If we evaluate $\chi$ on $x$, we find that:

The contribution from $\chi_{\infty}$ is $x$.

The contribution from all primes away from $p$ is $\prod (\pi_i)^{-1} = \epsilon/x$.

The contribution from $p$ is $\chi_p(x) = \displaystyle{\left(\frac{x}{p}\right) = \epsilon}$.

The product of all these terms is $1$, so we do get a Hecke character. The other Hecke character $\overline{\chi}$ is the conjugate of this one.

As an application, we compute $a_{\ell}$ for the corresponding elliptic curve for primes $\ell$ which split in $K$. We need to compute $\chi(\mathrm{Frob}_{\pi}) + \overline{\chi}(\mathrm{Frob}_{\pi})$.
Using the required choice of Frobenius (geometric versus arithmetic) and the appropriate normalization of local class field theory, one sees that the answer is $\pi + \overline{\pi}$, where $(\pi/p) = + 1$. In other words, write $\ell = a^2 + p b^2$ with $(a/p) = + 1$; then $a_\ell = 2a$.
how to calculate the phase angle
Let's take an arbitrary equation $$ A\sin(x) + B\cos(x) = R\sin(x + \phi) \quad (1) $$ We can write $\sin(x + \phi)$ as $$ \sin(x + \phi) = \sin(x)\cos(\phi) + \sin(\phi)\cos(x)\qquad (2) $$ Therefore, $R\sin(x + \phi) = R\sin(x)\cos(\phi) + R\sin(\phi)\cos(x)$. Equating (1) with this expansion of (2) gives $$ A\sin(x) + B\cos(x) = R\sin(x)\cos(\phi) + R\sin(\phi)\cos(x) $$ So $A = R\cos(\phi)$ and $B = R\sin(\phi)$. Then $A^2 + B^2 = R^2(\cos^2(\phi) + \sin^2(\phi)) = R^2$; that is, $R = \sqrt{A^2+B^2}$. Then $$ \tan(\phi) = \frac{R\sin(\phi)}{R\cos(\phi)} =\frac{B}{A}\\ \phi = \tan^{-1}\Big(\frac{B}{A}\Big) $$ where if $A,B > 0$ we are in quadrant 1; if $A>0$ and $B<0$ we are in quadrant 4; if $A<0$ and $B>0$ we are in quadrant 2; and quadrant 3 for $A,B<0$. We can then do this for $R\cos(x-\phi)$ as well, where $$ R\cos(x-\phi) = R\cos(x)\cos(\phi) + R\sin(x)\sin(\phi) $$ Why don't you give it a try?
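For readers who want to compute $R$ and $\phi$ numerically, here is a minimal Python sketch (the function name is mine, not from the question); `math.atan2` does the quadrant bookkeeping described above automatically:

```python
import math

def amplitude_phase(A, B):
    # Convert A*sin(x) + B*cos(x) into R*sin(x + phi).
    # atan2 picks the correct quadrant from the signs of A and B,
    # so no case analysis is needed.
    R = math.hypot(A, B)      # sqrt(A^2 + B^2)
    phi = math.atan2(B, A)    # angle with R*sin(phi) = B, R*cos(phi) = A
    return R, phi

# Quick check: the two sides should agree for any x.
A, B = -3.0, 4.0
R, phi = amplitude_phase(A, B)
for x in (0.0, 1.0, 2.5):
    assert abs(A * math.sin(x) + B * math.cos(x) - R * math.sin(x + phi)) < 1e-12
print(R, phi)  # 5.0 and an angle in quadrant 2, as expected for A < 0, B > 0
```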
Show $\displaystyle \lim_{x \to 0} \Bigl(\ln \sum_{k=0}^r x^k \Bigr)\biggl/\Bigl( \sum_{k=1}^{\infty}\frac{x^k}{k!}\Bigr) = 1.$
Hint: Use L'Hospital's Rule. It works quite quickly.
Sets of integers that can be placed in a magic square
Yes, there is a formula for any $(n+1)\times (n+1)$ magic square, which can be deduced with a bit of hard work and clever moves.

Let's start. Because I do not have access to editing tools, I will say that cell $(a,b)$ is the cell on the $a^{\text{th}}$ row and $b^{\text{th}}$ column. Let us fill $(i,j)$ with $x_{(i-1)n+j}$, for all $i,j$ with $1\leq i,j \leq n$, and set $(n+1,n+1)=x$.

Observe that the "magic sum" is $$\sum_{i=0}^{n-1}x_{in+i+1}+x=S$$ because those are the numbers on the big diagonal which starts at $(1,1)$ and ends at $(n+1,n+1)$. Remember this property as $(*)$.

We can fill all the remaining cells accordingly. We get that for all $k$ with $1\leq k\leq n$, $$(k,n+1)=S-\sum_{i=1}^{n}x_{(k-1)n+i}$$ and $$(n+1,k)=S-\sum_{i=0}^{n-1}x_{in+k}$$ (Now, if you haven't done it already, draw this square. Maybe try small cases like $n=4$ or $5$ first.)

For the square to be magic, we must check the property for the $(n+1)^{\text{th}}$ row, the $(n+1)^{\text{th}}$ column, and the big diagonal that starts at $(1,n+1)$ and ends at $(n+1,1)$. In other words, the following must happen.

For the $(n+1)^{\text{th}}$ column: $$S=\sum_{k=1}^{n}(k,n+1)+x=\sum_{k=1}^{n}\bigg(S-\sum_{i=1}^{n}x_{(k-1)n+i}\bigg)+x=n\cdot S-\sum_{i=1}^{n^2}x_i+x$$

For the $(n+1)^{\text{th}}$ row: $$S=\sum_{k=1}^{n}(n+1,k)+x=\sum_{k=1}^{n}\bigg(S-\sum_{i=0}^{n-1}x_{in+k}\bigg)+x=n\cdot S-\sum_{i=1}^{n^2}x_i+x$$

For the second big diagonal: $$S=\sum_{i=1}^{n+1}(i,n+2-i)=\bigg(S-\sum_{i=1}^{n}x_i\bigg)+\bigg(S-\sum_{i=0}^{n-1}x_{in+1}\bigg)+\sum_{i=2}^{n}x_{in-(i-2)}$$

The conditions for the $(n+1)^{\text{th}}$ column and row are the same. Using $(*)$, we get $$x=\frac{\sum_{i=1}^{n^2}x_i-(n-1)\sum_{i=0}^{n-1}x_{in+i+1}}{n}$$ So this is the first condition. As an example, using the formula given in the comments for the $3\times 3$ square (so $n=2$), this is equivalent to: $$c+b=\frac{\big(c-b+c+(a+b)+c-(a-b)+c\big)-\big(c-b+c\big)}{2}=\frac{2c+2b}{2}$$ which is true.

Now let's see the condition for the other big diagonal, displayed above. By using $(*)$ and reducing, we get $$x=\sum_{i=1}^{n}x_i+\sum_{i=0}^{n-1}x_{in+1}-\sum_{i=2}^{n}x_{in-(i-2)}-\sum_{i=0}^{n-1}x_{in+i+1}$$ (Again, this can be checked using the formula for $3\times 3$.)

To conclude: the only (sufficient and necessary) condition for the square to be magic is the following. (Note: I got the final result by using the two equalities for $x$ and by reducing some terms.) Consider an $(n+1)\times(n+1)$ square, where $(a,b)$ is the cell situated on the $a^{\text{th}}$ row and $b^{\text{th}}$ column, and fill $(i,j)$ with $x_{(i-1)n+j}$ for all $i,j$ with $1\leq i,j \leq n$. Then we must have: $$\sum_{i=1}^{n^2}x_i+\sum_{i=0}^{n-1}x_{in+i+1}=n\cdot\bigg(\sum_{i=1}^{n}x_i+\sum_{i=0}^{n-1}x_{in+1}-\sum_{i=2}^{n}x_{in-(i-2)}\bigg)$$

So to answer your question: given a set of $(n+1)^2$ integers, we can form an $(n+1)\times (n+1)$ magic square with them if and only if there exist $x_1,x_2,\dots,x_{n^2}$ among them such that $$\sum_{i=1}^{n^2}x_i+\sum_{i=0}^{n-1}x_{in+i+1}=n\cdot\bigg(\sum_{i=1}^{n}x_i+\sum_{i=0}^{n-1}x_{in+1}-\sum_{i=2}^{n}x_{in-(i-2)}\bigg)$$

Finishing touches: to get the actual formula for every single damn cell, just use whichever formula for $(n+1,n+1)=x$ you want, and then the formulas for $(n+1,k)$ and $(k,n+1)$.

P.S. I am sorry, but the formulas cannot get nicer than this :(
residually finite semidirect product
Since $N$ is finitely generated and $K$ is finite, there are only finitely many group homomorphisms $N \to K$. So the intersection $L$ of their kernels has finite index in $N$, and it is characteristic in $N$ and hence normal in $G$. Then $LQ$ has finite index in $G$ and does not contain $N$.
Find the probability of placing $5$ dots on an $8 \times 8$ grid.
${64 \choose 5}$ looks OK for the denominator.

For distinct rows, there are ${8 \choose 5}$ ways of choosing the five rows. Then:

The dot in the first occupied row can be in any of the $8$ columns.
The dot in the second occupied row can be in any of the $7$ remaining columns.
The dot in the third occupied row can be in any of the $6$ remaining columns.
The dot in the fourth occupied row can be in any of the $5$ remaining columns.
The dot in the fifth occupied row can be in any of the $4$ remaining columns.

So I would get a probability of $$\dfrac{{8 \choose 5}\cdot 8\cdot 7\cdot 6\cdot 5\cdot 4}{{64 \choose 5}}$$
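As a sanity check, this probability is easy to evaluate with a short Python sketch:

```python
from math import comb

favorable = comb(8, 5) * 8 * 7 * 6 * 5 * 4  # choose 5 rows, then distinct columns
total = comb(64, 5)                         # all ways to place 5 dots on 64 cells
print(favorable / total)                    # approximately 0.0494
```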
If $a_{1}=1$ and $a_{n+1}=1+\frac{n}{a_{n}}$, then $a_{n}=\sqrt{n}+\frac{1}{2}-\frac{1}{8\sqrt{n}}+o\left(\frac{1}{\sqrt{n}}\right)$
I will present two approaches to this question. The first is less advanced and easier but also yields a weaker result.

First Method

Consider the auxiliary sequence $b_n =\sqrt{n+\frac{1}{4}}+\frac{1}{2}$. This is the unique positive solution to the equation $P_n(X)=0$, where $P_n(X)=X^2-X-n$.

${\bf Lemma~1.}$ For every $n\geq1$, we have $b_{n+1}\geq 1+\dfrac{n}{b_{n-1}}$.

Proof. Indeed, let $u=1+\frac{n}{b_{n-1}}$; then, using $b_{n-1}^2=b_{n-1}+n-1$ (which holds because $P_{n-1}(b_{n-1})=0$), $$u^2-u=\frac{(n+b_{n-1})n}{b_{n-1}^2}=\frac{(n+b_{n-1})n}{b_{n-1}+n-1}= n+\frac{n}{b_{n-1}+n-1}\leq n+1$$ since $b_{n-1}\geq 1$. This shows that $P_{n+1}(u)\leq0$ and consequently $u\leq b_{n+1}$, as desired.$\qquad \square$

${\bf Lemma~2.}$ For every $n\geq1$, we have $b_{n-1}\leq a_n \leq b_n$.

Proof. This is now an easy induction. It is clearly true for $n=1$, and if it is true for some $n$, then $$ b_n=1+\frac{n}{b_n}\leq 1+\frac{n}{a_n}= a_{n+1}\leq 1+\frac{n}{b_{n-1}}\leq b_{n+1}, $$ and the result follows.$\qquad \square$

It follows that $$ \sqrt{n-\frac{3}{4}}-\sqrt{n}\leq a_n-\sqrt{n}-\frac{1}{2}\leq \sqrt{n+\frac{1}{4}}-\sqrt{n} $$ Thus $$\sqrt{n}\left\vert a_n-\sqrt{n}-\frac{1}{2}\right\vert\leq \frac{3}{4(1+\sqrt{1-3/(4n)})}\leq \frac{1}{2}$$ So we have proved that, for every $n\geq 1$, $$ a_n= \sqrt{n}+\frac{1}{2}+{\cal O} \left(\frac{1}{\sqrt{n}}\right) $$ This is not the desired expansion, but it has the merit of being easy to prove.

Second Method

The general reference in this part is D. Knuth, The Art of Computer Programming, Vol. III, second edition, pp. 63--65.

Let $I(n)$ be the number of involutions in the symmetric group $S_n$ (i.e., $\sigma\in S_n$ such that $\sigma^2=I$). It is well known that $I(n)$ can be calculated inductively by $$ I(0)=I(1)=1,\qquad I(n+1)=I(n)+nI(n-1) $$ This shows that our sequence $\{a_n\}$ is related to these numbers by the formula $$a_n=\frac{I(n)}{I(n-1)},$$ since $a_1=I(1)/I(0)=1$ and $I(n+1)/I(n)=1+n\,I(n-1)/I(n)$. So we can use what we know about these numbers, in particular the following asymptotic expansion, from Knuth's book: $$ I(n+1)=\frac{1}{\sqrt{2}} n^{n/2}e^{-n/2+\sqrt{n}-1/4}\left(1+\frac{7}{24\sqrt{n}}+{\cal O}\left(\frac{1}{n^{3/4}}\right)\right). $$ Now it is a "simple" matter to conclude from this that $a_n$ has the desired asymptotic expansion.

Edit: In fact the term $\mathcal{O}(n^{-3/4})$ effectively destroys the asymptotic expansion, as mercio noted. But in fact we have $$ I(n+1)=\frac{1}{\sqrt{2}} n^{n/2}e^{-n/2+\sqrt{n}-1/4}\left(1+\frac{7}{24\sqrt{n}}+{\cal O}\left(\frac{1}{n}\right)\right). $$ This is shown by Wimp and Zeilberger in their paper "Resurrecting the Asymptotics of Linear Recurrences", which can be found here.
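As a quick numerical sanity check of the claimed expansion (an illustration, not a proof), one can iterate the recurrence directly:

```python
import math

# Compare a_n with sqrt(n) + 1/2 - 1/(8 sqrt(n)) for a large n.
a, n = 1.0, 1          # a_1 = 1
while n < 10**6:
    a = 1.0 + n / a    # a_{n+1} = 1 + n / a_n
    n += 1
approx = math.sqrt(n) + 0.5 - 1.0 / (8.0 * math.sqrt(n))
# If the expansion holds, the difference times sqrt(n) should be close to 0.
print(a - approx, (a - approx) * math.sqrt(n))
```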
Using the Total Probability Rule
$P(A_2)P(2W|A_2)+P(A_3)P(2W|A_3)+P(A_4)P(2W|A_4)$ $=\displaystyle\frac{\binom{6}{2}\binom{4}{2}}{\binom{10}{4}}\cdot\frac{1}{\binom{4}{2}}+\frac{\binom{6}{3}\binom{4}{1}}{\binom{10}{4}}\cdot\frac{\binom{4}{2}-\binom{3}{1}}{\binom{4}{2}}+\frac{\binom{6}{4}}{\binom{10}{4}}\cdot1=\frac{15+80\cdot\frac{3}{6}+15}{\binom{10}{4}}=\frac{70}{210}=\frac{1}{3}$. As Henry pointed out above, you can also think of this as drawing 2 balls randomly from the first box, which gives $\frac{\binom{6}{2}}{\binom{10}{2}}=\frac{15}{45}=\frac{1}{3}.$
Why does $\lim_{k\to\infty} \sqrt[k] {\big | \frac{k^{1/k}-1}{2^k}\big |} = 1/2$?
Let $$ a_k=\frac{k^{\frac1k}-1}{2^k}$$ thus $$\frac{a_{k+1}}{a_k}=\frac{(k+1)^{\frac1{k+1}}-1}{2^{k+1}}\frac{2^k}{k^{\frac1k}-1}=\frac12\frac{(k+1)^{\frac1{k+1}}-1}{k^{\frac1k}-1}\to \frac12\cdot 1=\frac12$$ indeed $$k^{\frac1k}-1=e^{\frac{\ln k}{k}}-1=\frac{\ln k}{k}+o\left(\frac{\ln k}{k}\right)$$ $${(k+1)}^{\frac1{k+1}}-1=e^{\frac{\ln (k+1)}{k+1}}-1=\frac{\ln (k+1)}{k+1}+o\left(\frac{\ln k}{k}\right)$$ $$\frac{(k+1)^{\frac1{k+1}}-1}{k^{\frac1k}-1}=\frac{\frac{\ln (k+1)}{k+1}+o\left(\frac{\ln k}{k}\right)}{\frac{\ln k}{k}+o\left(\frac{\ln k}{k}\right)}=\frac{\frac{k}{\ln k}\frac{\ln (k+1)}{k+1}+o\left(1\right)}{1+o\left(1\right)}\to 1$$ therefore $$\frac{a_{k+1}}{a_k}\to \frac12 \implies \sqrt[k] {a_k}\to \frac12$$
MLE for endpoints of the range of random variable.
One important thing that you are probably missing is that $f(x;\alpha,\beta)=0$ when $x\notin [\alpha,\alpha+\beta]$. So, for $L$ to take a positive value for your selection of $\alpha,\beta$, you need to have that $\alpha\leq X_{\min}\leq X_{\max}\leq \alpha+\beta$. I'm pretty sure you then saw that $L$ is strictly decreasing in $\beta$. This, together with the restriction that $\beta\geq X_{\max}-\alpha$, should make you conclude that optimally $\beta=X_{\max}-\alpha$ (rather than $\beta=X_{\max}-X_{\min}$, which you mention). You then need to substitute this value into your expression for $L$ and then proceed to maximise the resulting expression with respect to $\alpha$.
How to explain light reduction in Helmert demonstration (Chi square distribution)?
This equality can be explained using the Taylor series for $\sqrt[m]{1+x}$. If $-1 < x < 1$, then $$ \sqrt[m]{1 + x} = 1 + \frac{1}{m}x - \frac{(m-1)}{2!\space m^2}x^2 + \frac{(m-1)(2m-1)}{3!\space m^3}x^3 - \cdots\space\space\space\space(1) $$ $$ \sqrt[m]{1 - x} = 1 - \frac{1}{m}x - \frac{(m-1)}{2!\space m^2}x^2 - \frac{(m-1)(2m-1)}{3!\space m^3}x^3 - \cdots\space\space\space\space(2) $$ The first expression $$ \sqrt[m]{x+δ_m/2}-\sqrt[m]{x-δ_m/2} $$ can also be written as $$ \sqrt[m]{x+x\cdot δ_m/2x}-\sqrt[m]{x-x\cdot δ_m/2x} $$ Factoring out $x$, we obtain $$ \sqrt[m]{x(1+δ_m/2x)}-\sqrt[m]{x(1-δ_m/2x)} $$ and extracting $x$ from $\sqrt[m]{\cdots}$ we obtain $$ \sqrt[m]{x}\cdot\big(\sqrt[m]{1+δ_m/2x}-\sqrt[m]{1-δ_m/2x}\big) $$ Now, using the Taylor series (1) and (2) with $δ_m/2x$ in place of $x$, and neglecting terms of degree greater than two because they are insignificant (the quadratic terms cancel anyway), we obtain $$ \sqrt[m]{x}\space\Big(\big(1+\frac{δ_m}{2mx}-\frac{(m-1)}{2!\space m^2}\big(\frac{δ_m}{2x}\big)^2\big)-\big(1-\frac{δ_m}{2mx}-\frac{(m-1)}{2!\space m^2}\big(\frac{δ_m}{2x}\big)^2\big)\Big) $$ Simplifying, we obtain $$ \sqrt[m]{x}\space\big(2\cdot\frac{δ_m}{2mx}\big) $$ which is equal to $$ \frac{\sqrt[m]{x}}{mx}δ_m $$ The trick used by Helmert is to use the Taylor series for $\sqrt[m]{1+x}$ and $\sqrt[m]{1-x}$ and to drop the higher-order terms because they are insignificant.
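A quick numerical check of the final approximation (the values of $m$, $x$, $\delta_m$ below are chosen arbitrarily for illustration):

```python
# Compare the exact m-th root difference with Helmert's first-order estimate.
m, x, delta = 3.0, 50.0, 0.01
exact = (x + delta / 2) ** (1 / m) - (x - delta / 2) ** (1 / m)
approx = x ** (1 / m) / (m * x) * delta
print(exact, approx)  # the two agree to many digits when delta/x is small
```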
Probability density function (for which $a,b$)
Your function is a probability density function if it is nonnegative and its integral over $\mathbb{R}$ is $1$. Nonnegativity: $f$ is nonnegative iff $$ \forall x \in \mathbb{R},\ \Big(\dfrac{x-2}{4}\Big) \mathbb{1}_{(a, b)} (x) \geq 0 $$ $\Leftrightarrow$ $$ \forall x \in (a, b),\ \dfrac{x-2}{4} \geq 0 $$ $\Leftrightarrow$ $$ a \geq 2 $$ (and then automatically $b > a \geq 2$). Integral is $1$: $$ \int_{\mathbb{R}}f = \int_{a}^b \dfrac{x-2}{4}\, dx = \dfrac{1}{8} (b - a) (b+a - 4) $$ So we must have $(a, b)$ lie on the hyperbola $(b-a)(a+b-4)=8$. See here for the definition of hyperbola.
Hilbert spaces - equivalent norm
Consider for example that on $\mathbb R^m$ all the norms are equivalent, and so $\ldots$
Characterizing a circle.
As far as I know, given a metric space $(X,d)$, a point $a\in X$, and $r\in \Bbb R^+$, a circle $(C,a,r)$ is defined to be $\{b\in X\mid d(a,b)=r\}$. Since for any real number $r>0$ we have $x^2+y^2=r\iff \sqrt{x^2+y^2}=\sqrt{r}$, it turns out the set you're defining is just a circle in the Euclidean metric about the origin. Of course, the set for a circle about the point $(h,k)$ is just $\{(x,y)\mid d((x,y),(h,k))=\sqrt{(x-h)^2+(y-k)^2}=\sqrt{r}\}$.

There are other metrics on $\Bbb R^2$, though. I would invite you to consider what the unit circles for the following metrics would look like (a small numerical sketch follows at the end of this answer):

$d((x,y),(w,z))=|x-w|+|y-z|$

$d((x,y),(w,z))=\max(|x-w|,|y-z|)$

There are probably other characterizations of circles, but I can't think of any less complicated than the one using a distance function at the moment.

(Added after new edits)

For the metric space $(\mathbb{R}^2 , d)$ where $d$ is the standard Euclidean metric, we define the circle centered at the point $(a,b) \in \mathbb{R}^2$ to be the set $S^1 = \{ (x,y) \in \mathbb{R}^2 \, | \, (x-a)^2 + (y-b)^2 = r^2 \}$, where we say that $r$ is the radius of the circle. Is this definition equivalent to saying a circle is a closed curve in $\mathbb{R}^2$ such that all points on the curve are equidistant from a single fixed point?

In $\Bbb R^2$ they are virtually identical ("all points at a fixed distance from a fixed point") except that the one you are suggesting has "and the set is a closed curve" tacked on, which is a property of the set in question and doesn't really have to be part of its definition. In the Euclidean distance, $d(a,b)=r \iff d(a,b)^2=r^2$, so the fact that the square root is missing inside the set you mentioned is inessential.
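As promised, a small numerical illustration of the metrics mentioned above (a sketch; the function names are mine):

```python
import math

# Three metrics on R^2: Euclidean, taxicab, and maximum (Chebyshev).
euclid = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
taxi   = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
cheb   = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))

origin = (0.0, 0.0)
p = (0.5, 0.5)
# p lies on the taxicab unit circle (a square rotated 45 degrees) but not
# on the Euclidean unit circle; its Chebyshev distance from the origin is 0.5.
print(euclid(origin, p), taxi(origin, p), cheb(origin, p))
# 0.7071..., 1.0, 0.5
```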
What are some examples where the subspace topology is not the same as the topology defined directly on the subset?
Yes. There is a theorem in Munkres that states that for a convex subset of an ordered set, the order topology on the subset coincides with the subspace topology. (Convex means that if $a$, $b$ are in the subset, so is $[a,b]$.) The ordered square is not a convex subset of $\mathbb R^2$, so the two topologies don't have to coincide. A simpler example is the set $A=[0,1)\cup\{2\}\subset \mathbb R$. The order topology gives that $A$ is homeomorphic to $[0,1]$, whereas in the subspace topology $\{2\}$ is an open set.
Proofs by Induction: are my two proofs correct?
In a proof by mathematical induction, we wish to establish that some property $P(n)$ holds for each positive integer $n$ (or for each integer greater than some fixed integer $n_0$). We must first establish that the base case holds. Once we establish that it holds, we may assume the property holds for some positive integer $k$. We then need to prove that if $P(k)$ holds, then $P(k + 1)$ holds. Then, if our base case is $P(1)$, we obtain the chain of implications $$P(1) \implies P(2) \implies P(3) \implies \cdots$$ and $P(1)$, which establishes that the property holds for every positive integer. You should not assume $P(k + 1)$ is true. We must prove that $P(1)$ holds and that if $P(k)$ holds, then $P(k + 1)$ holds for each positive integer $k$. Let's look at the first proposition. Proof. Let $P(n)$ be the statement that $$1 + 3 + 6 + \cdots + \frac{n(n + 1)}{2} = \frac{n(n + 1)(n + 2)}{6}$$ Let $n = 1$. Then $$\frac{n(n + 1)}{2} = \frac{1(1 + 1)}{2} =\frac{1 \cdot 2}{2} = 1 = \frac{1 \cdot 2 \cdot 3}{6} = \frac{1(1 + 1)(1 + 2)}{6}$$ Hence, $P(1)$ holds. Since $P(1)$ holds, we may assume $P(k)$ holds for some positive integer $k$. Hence, $$1 + 3 + 6 + \cdots + \frac{k(k + 1)}{2} = \frac{k(k + 1)(k + 2)}{6}$$ This is our induction hypothesis. Let $n = k + 1$. Then \begin{align*} 1 + 3 + 6 + & \cdots + \frac{k(k + 1)}{2} + \frac{(k + 1)(k + 2)}{2}\\ & = \frac{k(k + 1)(k + 2)}{6} + \frac{(k + 1)(k + 2)}{2} && \text{by the induction hypothesis}\\ & = \frac{k(k + 1)(k + 2) + 3(k + 1)(k + 2)}{6}\\ & = \frac{(k + 1)(k + 2)(k + 3)}{6}\\ & = \frac{(k + 1)[(k + 1) + 1][(k + 1) + 2]}{6} \end{align*} Thus, $P(k) \implies P(k + 1)$ for each positive integer $k$. Since $P(1)$ holds and $P(k) \implies P(k + 1)$ for each positive integer $k$, $P(n)$ holds for each positive integer $n$.$\blacksquare$ I will leave the second proof to you.
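As an empirical complement (not a substitute for the induction!), the identity can be spot-checked in Python:

```python
# Check 1 + 3 + 6 + ... + n(n+1)/2 = n(n+1)(n+2)/6 for small n.
for n in range(1, 1000):
    lhs = sum(k * (k + 1) // 2 for k in range(1, n + 1))
    rhs = n * (n + 1) * (n + 2) // 6
    assert lhs == rhs
print("verified for n = 1, ..., 999")
```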
How would you mathematically formulate this as an optimization problem?
I believe it can be formulated as a mixed integer linear program, albeit a somewhat cumbersome one. The model I have in mind draws in part from routing problems (using a binary variable for each combination of vehicle and arc to handle the routing) and in part from machine scheduling (using continuous variables to represent the time a vehicle enters and exits an arc, if it in fact uses the arc -- similar to start/end time variables for jobs on machines, to avoid having them overlap on the machine). The model I doodled has quite a few binary variables (and quite a few continuous variables, but they're relatively cheap) and (gulp) quite a few "big M" constraints (which are conceptually valid but can create performance issues).
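To make the flavor of such a model concrete, here is a minimal, purely illustrative PuLP sketch; every name and number in it is invented for the example, and it is nowhere near a complete routing model:

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

# Toy data: two vehicles, two arcs (all hypothetical).
vehicles = ["v1", "v2"]
arcs = [("A", "B"), ("B", "C")]
travel_time = {("A", "B"): 3, ("B", "C"): 2}
M = 1000  # "big M" constant for the disjunctive constraints

prob = LpProblem("routing_with_timing", LpMinimize)

# Binary routing variables: does vehicle k use arc a?
use = {(k, a): LpVariable(f"use_{k}_{a[0]}{a[1]}", cat=LpBinary)
       for k in vehicles for a in arcs}
# Continuous timing variables: when does vehicle k enter/exit arc a (if used)?
enter = {(k, a): LpVariable(f"enter_{k}_{a[0]}{a[1]}", lowBound=0)
         for k in vehicles for a in arcs}
exit_ = {(k, a): LpVariable(f"exit_{k}_{a[0]}{a[1]}", lowBound=0)
         for k in vehicles for a in arcs}

for k in vehicles:
    for a in arcs:
        # If arc a is used, exit >= enter + travel time;
        # otherwise the big-M term deactivates the constraint.
        prob += exit_[k, a] >= enter[k, a] + travel_time[a] - M * (1 - use[k, a])

# One "no overlap" disjunction on a shared arc: v1 before v2, or vice versa.
order = LpVariable("v1_before_v2_on_AB", cat=LpBinary)
a = ("A", "B")
prob += enter["v2", a] >= exit_["v1", a] - M * (1 - order)
prob += enter["v1", a] >= exit_["v2", a] - M * order

prob += lpSum(exit_[k, b] for k in vehicles for b in arcs)  # placeholder objective
```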
Permutation (or Combination?) Word Problem
The number of ways of choosing 2 blue out of 4 is $\binom{4}{2}.$ After that you have 4 chips remaining which can be chosen from either 9 red or 6 yellow or both. So you are choosing 4 chips from $9 + 6 =15$ chips. Putting all together, you get $$\binom{4}{2}\binom{15}{4}.$$
One of the points of interval of convergence of the power series $\sum_{n=0}^\infty \left(\frac{x^8-1}{3}\right)^n$ is not a real number?
EDIT: Sorry, the previous answer was not correct. You have $$ \Bigg|\frac{x^8-1}{3}\Bigg| <1 \Leftrightarrow |{x^8}-1|<3 \Leftrightarrow |x|<4^{1/8} =\sqrt[4]{2}. $$ In these examples you typically want to work with the absolute value rather than getting rid of it, since the $x$ in your series may also be considered as a complex number; and complex numbers have no order relation, so you have to look at the modulus (absolute value) in order to obtain the radius of convergence, which will give you a disk in the complex case and an interval in the real case.
Riemannian metric on submanifold of $\mathbb{R}^n$
Abstractly, the metric on $\Sigma:=\partial U$ is defined by pullback along the inclusion $i:\Sigma\hookrightarrow \Bbb R^n$. This just means that if $u$ and $v$ are geometrically tangent to $\Sigma$ in $\Bbb R^n$, then $g(u,v):=u\cdot v$, the Euclidean dot product. For $p\in\Sigma$ we can take an orthonormal basis of $T_p\Bbb R^n$, say $e_1,\dotsc,e_{n-1},n$, where $e_1,\dotsc,e_{n-1}$ are tangent to $\Sigma$ and $n$ is a choice of normal. Then $\nabla f=\sum_{i=1}^{n-1}(e_if)e_i+(\partial_nf)n$, so dotting this with itself and using the fact that this is an orthonormal basis gives your claim.

In response to the comment, I'll elaborate. I now realize there's a conflict of notation, so we'll rename the ambient space to $\Bbb R^d$. The notation $e_if$ is just the action of $e_i$ on the (smooth) function $f$ as a tangent vector. The notation $\partial_nf$ is just code for $nf$, as well. Now $\nabla f$ is the total gradient of $f$ on $\Bbb R^d$. In the PDE world, this means it's $(\partial_1f,\dotsc,\partial_df)$, where $\partial_i:=\partial/\partial x^i$. In differential geometric language, $\nabla f=\sum_{i=1}^d (\partial_if)\partial_i.$ (For other ambient manifolds, you need to worry about the metric here.) But in fact, if $v_1,\dotsc,v_d$ is any orthonormal basis of $T_p\Bbb R^d$, then $\nabla f=\sum_{i=1}^d(v_if)v_i$. What I wrote above is this in the particular case where $v_1,\dotsc, v_{d-1}$ are an orthonormal basis of $T_p\Sigma$ and $n=v_d$ is a unit normal vector. Now $\nabla_\Sigma f$ is by definition the part of $\nabla f$ tangent to $\Sigma$. By the above expansion, we indeed have $$\nabla_\Sigma f=\sum_{i=1}^{d-1}(e_if)e_i$$ if $e_1,\dotsc, e_{d-1}$ is an orthonormal basis of $T_p\Sigma$. In a coordinate basis of $\Sigma$, we have the formula $$\nabla_\Sigma f=g^{ij}\partial_jf\,\partial_iF,$$ where $F$ is the inclusion $\Sigma\hookrightarrow \Bbb R^d$.
Is every image of a loop in Hausdorff space, homeomorphic to $S^1$?
What does "loop" mean? If it does not require injectivity, then the result is obviously false with any constant loop $f:S^1\to X$ defined by $f(s)=x$ for all $s\in S^1$. If it does require injectivity, then it is a general result that a continuous injection from a compact space to a Hausdorff space must be a homeomorphism onto its image. However, either way, your argument is flawed. Given a continuous function $\alpha:X\to Z$ that descends along a quotient map $q:X\to Y$, the induced map $h:Y\to Z$ need not be a homeomorphism.
Proving eigenvalues for a matrix
Note that any eigenvalue $\lambda$ of $A$ satisfies $Av = \lambda v \tag{1}$ for some vector $v \ne 0$, so that $A^2 v = A(Av) = A(\lambda v) = \lambda(Av) = \lambda(\lambda v) = \lambda^2 v, \tag{2}$ whence $0 = (A^2 + I)v = A^2v + v = (\lambda ^2 + 1)v, \tag{3}$ and since $v \ne 0$ this implies $\lambda^2 + 1 = 0; \tag{4}$ but equation (4) has no real solutions, and thus the desired conclusion immediately follows. QED. Hope this helps. Cheerio, and as always, Fiat Lux!!!
How to solve this algebraic manipulation problem?
Start: $$ a-b = xy(x-y)+yz (y-z)+zx(z-x) $$ $$= xy(x-y)+yz (y-z)+zx(z-\color{red}y)+zx(\color{red}y-x) $$ $$= (xy-zx)(x-y)+(yz -zx)(y-z)$$ $$ =x(y-z)(x-y)+z(y-x)(y-z) $$ $$ = (x-y)(y-z)(x-z)$$ So $$ E = (a-b)\underbrace{(x^2+xy+y^2)(y^2+yz+z^2)(z^2+zx+x^2)}_{A}$$ Now you have to figure out $A$. (I bet it is $3ab$.)
Given that $\int_{0}^{100} (a^x-1) \:\mathrm{d}x = 30$, how can I calculate $a$?
Hints: $$\int a^x\,dx=\frac{a^x}{\log a} +C\implies$$ $$\int\limits_0^{100}(a^x-1)\,dx=\left.\left(\frac{a^x}{\log a}-x\right)\right|_0^{100}=\ldots$$
$\displaystyle \lim_{x \to 0}|x|^{\frac{1}{x^2}}$?
We have $\lim_{x\rightarrow 0}\frac{1}{x^2}=+\infty$ and $\lim_{x\rightarrow 0} \ln |x| = -\infty$. So the product gives $\lim_{x\rightarrow 0}\frac{\ln|x|}{x^2}=-\infty$. Hence $|x|^{\frac{1}{x^2}} = e^{\ln|x|/x^2} \to e^{-\infty}$, so the limit is $0$.
Is this argument a valid use of inductive proof?
Yes, your argument does hold without proving a base case, though you should make very clear that your argument is really just the inductive step: if there exist $W_k(x)$, $c_k$, $d_k$, $U_k(x)$ and $L_k(x)$ satisfying the given recurrence relations, and if the given inequalities hold for $k$, then they hold for $k+1$. This does not prove that any of the inequalities hold; it does not even prove that such sequences exist. It now suffices to prove a base case, for example by exhibiting such $W_0(x)$, $c_0$, $d_0$, $U_0(x)$ and $L_0(x)$.
Does the existence of a mathematical object imply that it is possible to construct the object?
Really the answer to this question will come down to the way we define the terms "existence" (and "construct").

Going philosophical for a moment, one may argue that constructibility is a priori required for existence; this, broadly speaking, is part of the impetus for intuitionism and constructivism, and related to the impetus for (ultra)finitism.$^1$ Incidentally, at least to some degree we can produce formal systems which capture this point of view (although the philosophical stance should really be understood as preceding the formal systems which try to reflect them; I believe this was a point Brouwer and others made strenuously in the early history of intuitionism).

A less philosophical take would be to interpret "existence" as simply "provable existence relative to some fixed theory" (say, ZFC, or ZFC + large cardinals). In this case it's clear what "exists" means, and the remaining weasel word is "construct." Computability theory can give us some results which may be relevant, depending on how we interpret this word: there are lots of objects we can define in a complicated way but which provably have no "concrete" definitions:

The halting problem is not computable.

Kleene's $\mathcal{O}$ - or, the set of indices for computable well-orderings - is not hyperarithmetic.

A much deeper example: while we know that for all Turing degrees ${\bf a}$ there is a degree strictly between ${\bf a}$ and ${\bf a'}$ which is c.e. in $\bf a$, we can also show that there is no "uniform" way to produce such a degree in a precise sense.

Going further up the ladder, ideas from inner model theory and descriptive set theory become relevant. For example:

We can show in ZFC that there is a (Hamel) basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$; however, we can also show that no such basis is "nicely definable," in various precise senses (and we get stronger results along these lines as we add large cardinal axioms to ZFC). For example, no such basis can be Borel.

Other examples of the same flavor: a nontrivial ultrafilter on $\mathbb{N}$; a well-ordering of $\mathbb{R}$; a Vitali (or Bernstein or Luzin) set, or indeed any non-measurable set (or set without the property of Baire, or without the perfect set property); ...

On the other side of the aisle, the theory ZFC + a measurable cardinal proves that there is a set of natural numbers which is not "constructible" in a precise set-theoretic sense (where constructible means, basically, buildable just from "definable transfinite recursion" starting with the empty set). Now the connection between $L$-flavored constructibility and the informal notion of a mathematical construction is tenuous at best, but this does in my opinion say that a measurable cardinal yields a hard-to-construct set of naturals in a precise sense.

$^1$I don't actually hold these stances except very rarely, and so I'm really not the best person to comment on their motivations; please take this sentence with a grain of salt.
Are these all the strings that the regular expression (a* + b*) . (a.b)* accepts or am I missing some?
Yes. $a^*+b^*$ gets you all strings that do not contain both $a$ and $b$, so you get the strings $a^n$ and $b^n$ for $n\ge 0$. $(ab)^*$ gets you $(ab)^m$ for all $m\ge 0$. Thus, $(a^*+b^*)(ab)^*$ gives you the strings $a^n(ab)^m$ and $b^n(ab)^m$ for $m,n\ge 0$. The easiest way to check a string $w$ is to start processing it from the back. Find the maximum word $u$ of the form $(ab)^m$ that is a final segment of $w$: $u=(ab)^m$ for some $m\ge 0$, and $w=xu$ for some $x$. If $x$ has the form $a^*$ or $b^*$, then $w$ matches $(a^*+b^*)(ab)^*$; if not, it doesn't.
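The same check can be done mechanically; in Python's `re` syntax the union $+$ becomes `|` and concatenation is juxtaposition:

```python
import re

# (a* + b*).(a.b)* translated to Python regex syntax:
pattern = re.compile(r"(a*|b*)(ab)*")

for w in ["", "aaa", "bb", "abab", "aaab", "bbabab", "abba", "ba"]:
    print(w, bool(pattern.fullmatch(w)))
# "aaab" matches as a^2 (ab)^1; "abba" and "ba" are rejected.
```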
What is the expected distance between endpoints of $n$ line segments of length 1 connected at random angles?
Let each line segment be a unit vector $$ \langle \cos(\theta) , \sin(\theta) \rangle $$ Adding up $n$ unit vectors gives the resultant vector from the beginning to the end of the string $$ \sum_{k=1}^n \langle \cos(\theta_k) , \sin(\theta_k) \rangle $$ The magnitude of the resultant vector is the distance from beginning to end $$ D = \Bigg|\Bigg\langle \sum_{k=1}^n \cos(\theta_k) , \sum_{k=1}^n \sin(\theta_k) \Bigg\rangle\Bigg| $$ That equals $$ \sqrt{\Big(\sum_{k=1}^n \cos(\theta_k)\Big)^2 + \Big(\sum_{k=1}^n \sin(\theta_k)\Big)^2} $$ To find the average distance, we need to add up the distances from every possible combination of thetas and divide by the number of combinations. Since each theta can be any value between $0$ and $2\pi$, we use an integral $$ \frac{1}{(2\pi)^n} \int_{0}^{2\pi} \dots \int_{0}^{2\pi} \sqrt{\Big(\sum_{k=1}^n \cos(\theta_k)\Big)^2 + \Big(\sum_{k=1}^n \sin(\theta_k)\Big)^2}\; d\theta_1 \dots d\theta_n $$ Solve that and you get your average distance for any $n$.
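The integral is hard to evaluate in closed form, but a Monte Carlo sketch (assuming independent angles uniform on $[0,2\pi)$, as above) gives a quick numerical estimate:

```python
import math
import random

def avg_distance(n, trials=100_000):
    # Monte Carlo estimate of the expected endpoint distance for n unit segments.
    total = 0.0
    for _ in range(trials):
        thetas = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
        dx = sum(math.cos(t) for t in thetas)
        dy = sum(math.sin(t) for t in thetas)
        total += math.hypot(dx, dy)
    return total / trials

# For large n the central limit theorem suggests roughly sqrt(pi * n) / 2.
print(avg_distance(10))  # about 2.8
```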
Notation to describe that a value is equivalent to at least one component of a vector?
Given a vector $v=\langle v_1,v_2,v_3,\dots,v_k\rangle$ then the statement that $1$ is at least one of the entries can be stated as $\exists i\in\{1,2,\dots,k\}$ such that $v_i=1$. That being said, I think it is still easier said in words. It should be pointed out, though, that this is not a totally useful property in terms of linear algebra since the property of "having an entry equal to one" while true according to one basis can be false according to a different basis.
Does there exist an infinite nilpotent group with finite center?
Yes, but such examples cannot be finitely generated. An infinite extraspecial group is an example. Let $p$ be a prime, and let $G$ be the group defined by the presentation $$\langle x_i\ (i \in {\mathbb Z} \setminus \{0\}),z \mid x_i^p=z^p = [x_i,z] = [x_i,x_j]=1\ (|i| \ne |j|),\,[x_i,x_{-i}]=z \rangle.$$ Then $Z(G) = \langle z \rangle$ has order $p$. This group could also be described as the central product of infinitely many copies of a nonabelian group of order $p^3$.
For $x\in X$, prove $C(x)$ is a maximal connected set in $X$
If maximality is not the definition, then probably the definition is $$C(x) = \bigcup \{ C \subseteq X: x \in C, C \text{ connected}\}$$ This is a connected set (which contains $x$), as a union of connected sets that all intersect (in $x$). The union is non-void as $C= \{x\}$ is always part of that union. And if $C$ is any connected set that contains $x$ then it is by definition one of the sets in the union that comprises $C(x)$ and so trivially $C \subseteq C(x)$, and so if $D$ is connected and $C(x) \subseteq D$, we know $x \in D$ so $D \subseteq C(x)$ by the above and hence $C(x) = D$. ($m$ is a maximal element in a partially ordered set $(P, \le)$ iff $$\forall p \in P: p \ge m \implies p=m$$ and we have shown that a component is a maximal element in the set of all connected subsets of $X$ ordered by $ \subseteq$.)
Inverse of a $2 \times 2$ covariance matrix
The inverse of the $2\times2$ matrix $$\begin{pmatrix}a&b\\c&d\end{pmatrix}$$ is $$\frac1\Delta\begin{pmatrix}\ \ d&-b\\-c&\ \ a\end{pmatrix},$$ where $\Delta=ad-bc$ is the determinant, as you can check by direct product.
Determining the Smith Normal Form
We do want to use a kind of Gaussian elimination, but you have to be careful since you should not multiply a row by anything other than $1$ and $-1$, and you should not add non-integer multiples of one row to another row. So we can get started simply enough: $$\begin{align*} \left(\begin{array}{rrrr} 2 & 4 & 6 & -8 \\ 1 & 3 & 2 & -1 \\ 1 & 1 & 4 & -1 \\ 1 & 1 & 2 & 5 \end{array}\right) &\to \left(\begin{array}{rrrr} 1 & 1& 2 & 5\\ 1 & 3 & 2 & -1\\ 1 & 1 & 4 & -1\\ 2 & 4 & 6 & -8 \end{array}\right) &&\to \left(\begin{array}{rrrr} 1 & 1 & 2 & 5\\ 0 & 2 & 0 & -6\\ 0 & 0 & 2 & -6\\ 0 & 2 & 2 & -18 \end{array}\right)\\ &\to \left(\begin{array}{rrrr} 1 & 1 & 2 & 5\\ 0 & 2 & 0 & -6\\ 0 & 0 & 2 & -6\\ 0 & 0 & 2 & -24 \end{array}\right) &&\to\left(\begin{array}{rrrr} 1 & 1 & 2 & 5\\ 0 & 2 & 0 & -6\\ 0 & 0 & 2 & -6\\ 0 & 0 & 0 & -30 \end{array}\right)\\ &\to\left(\begin{array}{rrrr} 1 & 1 & 2 & 5\\ 0 & 2 & 0 & -6\\ 0 & 0 & 2 & -6\\ 0 & 0 & 0 & 30 \end{array}\right). \end{align*}$$ This uses only elementary row operations. From this we see that the relations on your group are equivalent to: $$\begin{array}{rcccccccl} r_1&+&r_2&+&2r_3&+&5r_4 &= & 0\\ & &2r_2 & & & - & 6r_4 &=& 0\\ & & & & 2r_3 & - &6r_4 & = & 0\\ & & & & & &30r_4 & = & 0 \end{array}$$ These elementary row operations replace the relations on our original set of generators with a new set of relations which are equivalent to the original, in the sense that if the generators satisfy these relations, then they satisfy the original relations and vice-versa. We can now use elementary column operations, which also correspond to performing certain base changes. For example, subtracting five times the first column from the fourth column is equivalent to replacing $r_1$ with $r_1-5r_4$, which does not change the subgroup generated by $r_1,r_2,r_3,r_4$. Etc. Performing those elementary column operations, since the $(1,1)$ entry is the gcd of the entries on the first row (and similarly for the rest of the rows), we can eliminate all nondiagonal entries and end up with the diagonal matrix $$\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 30 \end{array}\right),$$ from which you can read off the structure of the group in question. The column operations corresponds to replacing our set of generators with a new set that generates the same group. For example, the operations we performed with the first column, subtracting the first column from the second, twice the first column from the third, and five times the first column from the fourth, correspond to replacing the original generator $r_1$ with the generator $r_1-r_2-r_3-5r_4$. This does not change the subgroup, because $$\langle r_1,r_2,r_3,r_4\rangle = \langle r_1-r_2-r_3-5r_4,r_2,r_3,r_4\rangle.$$ Similarly with the other operations. In the end we will have an abelian group generated by elements $a,b,c,d$, where, in terms of the original generators, we have $$\begin{align*} a & = r_1-r_2-r_3-5r_4\\ b &= r_2-3r_4\\ c &= r_3-3r_4\\ d &= r_4, \end{align*}$$ which yields the relations $$\begin{align*} a&=0\\ 2b&=0\\ 2c&=0\\ 30d&=0 \end{align*}$$ from which we can just read off the abelian group structure as well.
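If you want to cross-check such a computation, recent versions of SymPy ship a Smith normal form routine (assuming `sympy.matrices.normalforms.smith_normal_form` is available in your version):

```python
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([
    [2, 4, 6, -8],
    [1, 3, 2, -1],
    [1, 1, 4, -1],
    [1, 1, 2,  5],
])
print(smith_normal_form(A))  # diag(1, 2, 2, 30), matching the hand computation
```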
Show that $\sum_{d\mid f} \varphi(f/d) a^{|d|} \equiv 0 \pmod f$
I have finally written up a proof. See Corollary 3.78 in my report Do the symmetric functions have a function-field analogue?. (Caution: The numbering will probably change several times. See the PDF compiled the 4th of November 2016 for a frozen version whose numbering definitely agrees with my post here.) Most of my report (which is long, and for all the bad reasons) is off-topic to your question, since the proof (well, the second proof) of Corollary 3.78 is mostly self-contained. All you need to read (I think) are §1.1, §1.2, §3.14, and the second proof of Corollary 3.78. I suspect that you know most of §3.14 already, but I could not locate the proofs in the literature, so I found it easier to write them down.
Geometry without similarity and other higher concepts
Draw a line $l$ through $P$ parallel to $AB$. Let $E$ be the point of intersection of $l$ and $FC$. Now $FSPE$ is a parallelogram, thus $SP = FE$ (1). Now let $G$ be the point of intersection of $l$ and $AC$; then $\angle GPC=\angle ABC=\angle ACB$, thus $PGC$ is isosceles and thus $CE=PQ$ (2), as they are both heights of the isosceles triangle to the equal sides. From (1) and (2) the original statement follows.
Boundary of set A in discrete topological space
Sure. If you know the definition of $\partial A = \overline{A}\setminus A^{\circ}$, then this is clear, as for all subsets $A$ we have $A = A^{\circ}$ (all sets are open) and $A = \overline{A}$ (all sets are closed). Also, if $A \subset X$ and $x \in X$, the neighbourhood $\{x\}$ of $x$ cannot intersect both $A$ and $X \setminus A$, whatever $A$ is, so $x \notin \partial A$ (this is another equivalent definition of boundary).
Solve the following matrix equation $X'X=A$
A positive definite matrix $A$ has a unique positive definite square root. If we diagonalize $A$ as $A = U D U^*$ with $U$ a unitary matrix and $D$ diagonal, then this square root is $X = U \sqrt{D} U^*$ where $\sqrt{D}$ is the diagonal matrix whose elements are the square roots of the corresponding elements of $D$. If $A$ is real, $U$ is orthogonal and $X$ is real and symmetric.
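A numerical sketch of this construction (using NumPy's `eigh` for the symmetric eigendecomposition; the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # an SPD (covariance-like) matrix
w, U = np.linalg.eigh(A)            # A = U diag(w) U^T with w > 0
X = U @ np.diag(np.sqrt(w)) @ U.T   # the unique SPD square root
print(np.allclose(X.T @ X, A))      # True: X is symmetric, so X'X = X^2 = A
```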
Show that there is no simple group of order $6\cdot{p}^m$ for any prime $p$ and positive integer $m$
Go through each one of the primes. For $p=2$ and $p=3$ the order is $p^{m+1}\cdot(6/p)$. Use the theorem that if a group $G$ of order $p^m b$ (with $p \nmid b$ and $b > 1$) is simple, then $\left\vert G \right\vert \mid b!$, and hence $p^m \mid (b-1)!$. That works for all cases apart from $p=5$, for which you can count elements using Sylow subgroups and you will have too many. For $p \geq 7$ it is simple: $p \nmid 5!$.
Axiomatic Metacategories
What MacLane is doing is basically providing a definition of what a category is in any theory of collections. Here he uses the term collection in a very technical sense, as opposed to set. Without going too much into the details, you can think of collections as things for which it makes sense to say that they have elements. Sets are collections which are elements of other collections; these are the collections of theories like ZFC. As you pointed out, this is required to build things like the category of sets. The problem is that if you had only collections that are sets, then you wouldn't be able to provide a collection of all sets (which cannot be a set, by Russell's paradox), hence you wouldn't be able to provide a category of categories. If you admit collections that aren't sets (as for instance in NBG), you become able to build a collection of all sets (which will be a collection, though not a set). Hope this helps.
Proving continuity with two different metrics
Let $f_n \to f$ in the topology of $d^*$. Then $d(I(f_n), I(f)) =|I(f_n) - I(f)| = |I(f_n-f)| \leq I(|f_n - f|) = d^* (f_n,f)$. (I have used that $|\int f| \leq \int |f|$.) Now pick some arbitrary $\varepsilon >0$ and take $\delta = \varepsilon$ and this is all you need.
What does $a$ mean in Taylor series formula?
Any time you use Taylor's formula to write a power series for a function, you must choose a point (of the domain) at which to evaluate the derivatives of the function. This point is called $a$. If the special choice $a=0$ is made, this yields Maclaurin's formula. It is "gibberish" to set $a \leftarrow x$, since $a$ really is a constant for a given Taylor expansion and $x$ really is a variable. You can, however, have a Taylor series (some infinite polynomial in $x$) and then make the substitution $x \rightarrow a$ and find out the value of $f(a)$, since all the derivative terms are multiplied by powers of zero in the series.

For your coding, make a function which I here call "Taylor" and you should call something better. The simplest form of Taylor() takes the arguments $f$, $a$, $n$, $x$. Taylor() then computes the 0th through $n$-th derivative of $f$ at $a$, then evaluates Taylor's formula for the finite set of summation indices $0..n$, returning the result. (A minimal sketch of this simplest form appears after the list of improvements below.) The following are improvements:

Don't supply an $x$, and get back an object which has precomputed the $n+1$ coefficients and has an operator() or "evaluate" public member function that can do the finite sum using a supplied value of $x$. Arrange for whatever function actually does the final numerical sum to sort its arguments and sum from smaller in magnitude to larger in magnitude to control intermediate precision loss.

Don't supply an $n$ or the $x$, but do supply a precision argument. The function that does the summing now needs to have a way to estimate the residual of the Taylor series (which means your $f$ has to be able to tell you how big its next derivative can be anywhere in some interval) and then set $n$ so that the residual will be less than the precision argument when evaluated at $x$.

Construct function objects that just know their derivatives for any value of $a$. Numerically calculating the derivatives and then Taylor summing is horribly unstable.

Extend your numerical system to include "common numbers" like $\sqrt{2}$, $\sqrt{a}$ and the like. Let your system pass weighted sums of these numbers around and then only reduce them to floating point at the last possible moment. This will preserve significantly more accuracy. (I.e., a result can be of the form $0.0167889 + 45.6 \sqrt{2} + 6 \sqrt{a}$, and you only reduce this to a floating point number at the last possible time.)

Extend your numerical system to work with arbitrary symbolic expressions. Add a facility for simplifying these expressions so that the intermediate expression swell doesn't consume all available memory. Add a facility for computing the resulting approximations to arbitrary precision. (At this point, you've implemented quite a hunk of a computer algebra system...)
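Here is a minimal Python/SymPy sketch of that simplest form (the names are mine; it uses exact symbolic derivatives, in line with the warning above about numerical differentiation):

```python
import sympy as sp

def taylor(f, a, n, x_value):
    # Degree-n Taylor polynomial of f about a, evaluated at x_value.
    x = sp.symbols("x")
    expr = f(x)
    total = sp.Integer(0)
    for k in range(n + 1):
        coeff = sp.diff(expr, x, k).subs(x, a) / sp.factorial(k)
        total += coeff * (x_value - a) ** k
    return total

# Example: sin(1/2) via a 7th-degree expansion about a = 0 (Maclaurin).
print(float(taylor(sp.sin, 0, 7, sp.Rational(1, 2))))  # ~0.4794255
```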
How do you find $x$ from $y=\operatorname{sinc}(x)$?
We cannot find a closed form for this, so you are right, numerical methods are needed. We can use any root-finding approach. For Fixed Point Iteration, we have: $$x = 2 \sin x$$ This leads to the root found using WA. We can also use Newton's Method with $$f(x) = 2 \sin x - x,$$ which converges in four steps for a start value of $2$. Here are the same results using WA. The root is: $$x = \pm ~1.895494267033981\ldots$$
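For completeness, the Newton iteration is a few lines of Python (start value 2, as above):

```python
import math

# Newton's method for f(x) = 2 sin x - x, with f'(x) = 2 cos x - 1.
x = 2.0
for _ in range(6):
    x -= (2.0 * math.sin(x) - x) / (2.0 * math.cos(x) - 1.0)
print(x)  # 1.895494267033981...
```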
How to manually draw graphs on a torus?
Some tips:

Instead of drawing on a torus, it's enough to draw on a topologically equivalent shape (e.g. a surface of a regular 3D object with a hole, like a mug where the hole is between the mug and its handle).

To help your imagination, just cut out a piece of paper and make (cut out) a hole in it, and draw your graph on it (and be aware you can go over the edge to the other side).

You could try to draw your graph on a plane with a bridge (like a regular real-life bridge): there are two places where the bridge starts and ends, and you can go over the bridge without crossing the edges underneath it.

You could try to draw your graph on a simplified net of a torus, in particular two disks with holes (remember the outer and inner edges of the disks will be glued together).

The best approach in my opinion would be to use just a normal rectangle with edges glued together (you glue together the horizontal edges and then also the vertical edges); it is a special case of a simplified net of a torus. It means that when you go over the edge, you come back from the other side (although mind your orientations: on the other side you "reappear" exactly at the same position you reached the first side).

All the above pictures were created with Inkscape. You can find the sources here (download raw and open in Inkscape): 1, 2, 3, 4, 5.

I hope this helps $\ddot\smile$
What is the analogue of linear algebra for the quadratic case
Let $V$ and $W$ be finite-dimensional vector spaces over a field $K$ with $\operatorname{char} K \neq 2$. Let's first recall the following facts about linear transformations:

One has a natural isomorphism $W \otimes V^\ast \cong L(V,W)$, given by $(w \otimes f) \mapsto (v \mapsto f(v)w)$.

In particular, recall that $L(V) := L(V,V) \cong V \otimes V^\ast$. Then, the trace $\operatorname{Tr} : L(V) \to K$ corresponds to the canonical map $\tau : V \otimes V^\ast \to K$ given by $\tau(v \otimes f) := f(v)$.

Recall that for $n := \dim V$, $\wedge^n V$ is $1$-dimensional, so that $L\left(\wedge^n V\right)$ is also $1$-dimensional, with $L\left(\wedge^n V\right) = K \operatorname{id}_{\wedge^n V}$. Then, for $T \in L(V)$, $\det(T) \in K$ is defined by $$\det(T) \operatorname{id}_{\wedge^n V} := \wedge^n T : v_1 \wedge \cdots \wedge v_n \mapsto Tv_1 \wedge \cdots \wedge Tv_n.$$

Let's now turn to quadratic forms. One defines a $W$-valued quadratic form on $V$ to be a map $q : V \to W$ such that $$b_q(v_1,v_2) := \tfrac{1}{2}(q(v_1+v_2)-q(v_1)-q(v_2))$$ defines a bilinear map $V \times V \to W$, which, by construction, is symmetric. Indeed, $q \mapsto b_q$ defines a canonical isomorphism from the space of all $W$-valued quadratic forms on $V$ to the space of all symmetric bilinear maps $V \times V \to W$, with inverse $b \mapsto (v \mapsto b(v,v))$. Hence, it really suffices to think about bilinear maps $V \times V \to W$, particularly the symmetric ones.

One has a natural isomorphism from $W \otimes (V^\ast)^{\otimes 2}$ to the space of all bilinear maps $V \times V \to W$, given by $$w \otimes f_1 \otimes f_2 \mapsto \left((v_1,v_2) \mapsto f_1(v_1)f_2(v_2)w\right).$$ In particular, the symmetric maps (which correspond bijectively with the quadratic forms) correspond to $W \otimes S^2 V^\ast$.

Conventionally, one observes that if $W = K$, then a bilinear form $b : V \times V \to K$ corresponds unambiguously to a linear transformation $B : V \to V^\ast$, $B(v) := b(\cdot,v) = b(v,\cdot)$. More abstractly, $K \otimes (V^\ast)^{\otimes 2} = V^\ast \otimes V^\ast \cong L(V,V^\ast)$ by the natural isomorphism $W \otimes V^\ast \cong L(V,W)$. Then, for any isomorphism $\phi : V^\ast \to V$ (e.g., from a choice of basis on $V$), $\phi \circ B \in L(V)$, so that one could define $\operatorname{Tr}_\phi(b) := \operatorname{Tr}(\phi \circ B)$ and $\det_\phi(b) := \det(\phi \circ B)$. Observe, however, the dependence on $\phi$. In many applications, however, one works with $V = K^n$, which has a standard ordered basis, and hence a canonical isomorphism $\phi : (K^n)^\ast \cong K^n$, which is essentially the transpose map taking row vectors to column vectors. In any event, this is all the standard machinery.

Of course, a bilinear form $V \times V \to V \otimes V$ naturally corresponds to an element of $(V \otimes V) \otimes (V^\ast \otimes V^\ast)$, which corresponds to a linear transformation $V \otimes V \to V \otimes V$ thanks to the natural isomorphism $V^\ast \otimes V^\ast \cong (V \otimes V)^\ast$, in which case one could immediately define the trace and determinant of such a bilinear form to simply be the trace and determinant, respectively, of the corresponding element of $L(V \otimes V)$. However, I'm not sure this is anywhere close to being as useful or interesting as the usual constructions in terms of bilinear forms $V \times V \to K$.
dot product of vectors with not orthogonal basis
You aren't missing anything, (0.1) is the expression of the dot product with respect to an orthonormal basis. If the basis is not orthonormal you get (0.3). Notice that you do not just need the vectors to be pairwise orthogonal, but also of unit norm in order for (0.1) to hold.
Count the size of $|\{(b_1, \dots, b_n): (-1)^{b_1} a_1 + \cdots + (-1)^{b_n} a_n =0, b_i \in \{0,1\} \}|$ in $O(n)$ way
Let us rewrite the main condition as $$ \tag{1} (-1)^{b_1}a_1 +...+(-1)^{b_n} a_n = \sum\limits_{i: b_i = 0} a_i - \sum\limits_{i: b_i = 1} a_i = 0, $$ and hence the equivalent formulation of your question is to find the number of subsets of $\{1,...,n\}$ which partition $\{a_1,...,a_n\}$ into $2$ sets with equal sums. In this you have the partition problem, which is NP-complete. In fact, the partition problem asks for the existence of a single partition satisfying $(1)$, while you want to count all possible ones, so there is no polynomial algorithm here (unless P = NP), let alone a linear $O(n)$ one as you ask.

There is, however, an approach via dynamic programming, which is easy to program and might suffice for many practical scenarios. I'm sketching the idea below in a Python program (this is closest to pseudo-code and requires less explanation of the syntax), which, given a list of integers $a$, decides if there's a partition with equal sums.

```python
def partition(a):
    # a is the array in question
    # we decide whether there is a partition of a into equal sum subsets
    num = sum(a)
    if num % 2 != 0:
        return False
    num = num // 2
    n = len(a)
    memo = (num + 1) * [None]  # will be used for memoization, see below
    for i in range(num + 1):
        memo[i] = (n + 1) * [False]
    # memo[i][j] shows if the integer i can be represented by elements of a up to and including j
    for i in range(n + 1):
        memo[0][i] = True  # 0 can be represented by not choosing anything, the empty set
    for i in range(1, num + 1):
        for j in range(1, n + 1):
            # either integer i can be represented by elements a_0, ..., a_{j-2},
            # or i is represented by a_{j-1} and i - a_{j-1} by elements a_0, ..., a_{j-2}
            memo[i][j] = memo[i][j-1] or (i >= a[j-1] and memo[i - a[j-1]][j-1])
    return memo[-1][-1]  # the last element of the last row
```

The result of the function partition will tell you whether such an equal-sum partition exists. Then you can use backtracking algorithms and the values of the matrix memo to construct the actual partitions. The above is a well-known topic and you will find further details and plenty of resources on the internet.
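For instance (hypothetical inputs):

```python
print(partition([1, 5, 11, 5]))  # True:  {11} vs {1, 5, 5}
print(partition([1, 2, 5]))      # False: no subset sums to 4
```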
If $X$ a compact metric space and $Y$ Hausdorff and $f: X \to Y$ a continuous surjection, then $Y$ is also a compact metric space
As was noted in the comments, you actually mean that $Y$ is a compact metrizable space: a metric space requires a specific metric. The usual argument is actually a bit more indirect than the one that you’re attempting to carry out; I’ll give you a map to follow and leave the details for you to work out. Let $\mathscr{B}$ be a base for $X$, and without loss of generality assume that $\mathscr{B}$ is closed under taking finite unions. Let $\mathscr{N}=\{f[B]:B\in\mathscr{B}\}$. The members of $\mathscr{N}$ need not be open, but in all other respects $\mathscr{N}$ behaves like a base for $Y$. Show that a set $U\subseteq Y$ is open iff for each $y\in U$ there is an $N\in\mathscr{N}$ such that $y\in N\subseteq U$. (You’ll need both the compactness of $X$ and the fact that $\mathscr{B}$ is closed under taking finite unions.) Such a family $\mathscr{N}$ is called a network for the space $Y$. Verify that $\mathscr{N}$ is closed under taking finite unions. Use the compactness and regularity of $Y$ to show that if $U$ is open in $Y$, and $y\in U$, then $y\in\operatorname{int}_YN\subseteq N\subseteq U$ for some $N\in\mathscr{N}$. Conclude that $\{\operatorname{int}_YN:N\in\mathscr{N}\}$ is a countable network of open sets and therefore is a countable base for $Y$. In fact there is a general theorem: If $X$ is a compact Hausdorff space, and $\mathscr{N}$ is a network for $X$, then $X$ has a base $\mathscr{B}$ such that $|\mathscr{B}|\le|\mathscr{N}|$.
Prove that the following are isomorphic as groups but not as rings
By Definition, a ring homomorphism $f: R \rightarrow R'$ must preserve addition and multiplication and must map the multiplicative identity of $R$ to the multiplicative identity of $R'$. In your example, the ring $R'=2\mathbb{Z}$ does not have a multiplicative identity. So the two rings are not isomorphic (there is no isomorphism, or even a homomorphism from one ring to the other, for that matter). To show that the rings $\mathbb{Z}[\sqrt{2}]$ and $\mathbb{Z}[\sqrt{5}]$ are not isomorphic, you can use your idea that $x^2=2$ has no solution in the latter ring. But you need to justify why this method works. Here is a proof. Suppose there is an isomorphism $f$ between these two rings that takes $a+b\sqrt{2}$ to $a'+b'\sqrt{5}$. Since $f$ must take the identity to the identity, $f$ takes 1 to 1' (here, 1' is the identity in the second ring, and actually equals the integer 1; the primes are just to make things clearer). Since $f$ preserves sums, $f$ must take $1+1$ to $1'+1'$. Now, $(0+1 \sqrt{2})(0+1\sqrt{2}) = 1+1$ in the first ring. We can apply $f$ to both sides. Since $f$ preserves sums and products, we get the equation $(x'+y'\sqrt{5})^2 = 2$, where $x'+y'\sqrt{5}$ is the image of $(0+1\sqrt{2})$ under $f$. This equation has no solutions, and so we get a contradiction. Thus, there does not exist an isomorphism $f$ from the first ring to the second.
Can we say $ \left\| \sum_{n=1}^{\infty} x_n\right\|_{X} \geq C \|x_N\|_{X}$?
Consider $(x_n)_n$ with $x_{2n}=\frac{(-1)^n}{n}$ and $x_{2n+1}=-x_{2n}$. Then we end up with $0\geq C$, a contradiction. Thanks to the comments, I was totally wrong.
Show the intersection of a nonidentity normal subgroup and the center of P is not trivial
$M$, being normal, is the union of conjugacy classes (with respect to $P$), meaning a conjugacy class lies completely in $M$ or is disjoint from $M$. Since the size of a conjugacy class is either $1$ or a multiple of $p$ the number of singleton classes in $M$ is a multiple of $p$ ($M$ is also a p-group), moreover it is not $0$ since the class of the identity is one of them. So there are at least $p$ singleton classes in M. But these singletons are central elements in P, which proves the assertion. With thanks to @Myself for his comment.
Proof of sub-additivity for Shannon Entropy
Define the relative entropy (or Kullback–Leibler divergence) of the probabilities $P$ and $Q$, with $P\ll Q$, as $$D(P\|Q) = \sum_{x\in \Omega} p(x) \log \frac{p(x)}{q(x)}. $$ Using the inequality $\log a \leq a-1$ for $a>0$, it is easy to show that $$D(P\|Q) \geq 0$$ (just take $a=\frac{q(x)}{p(x)}$ for $p(x)>0$). But then \begin{align} H(X)+H(Y) - H(X,Y) &= \sum_{x,y\in \Omega} p(x,y)\log \frac{p(x,y)}{p_X(x)p_Y(y)} \\ &=D(P_{X,Y}\| P_X \otimes P_Y)\\ & \geq 0, \end{align} where $P_X$ and $P_Y$ are the marginal distributions of $X$ and $Y$; equality holds iff $X$ and $Y$ are independent.
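For a concrete numerical illustration, here is a minimal sketch with a made-up $2\times 3$ joint distribution (any joint table would do): $H(X)+H(Y)-H(X,Y)$ agrees with $D(P_{X,Y}\|P_X\otimes P_Y)$, and both are nonnegative.

import numpy as np

# Check sub-additivity on an arbitrary joint distribution p(x, y).
p_xy = np.array([[0.10, 0.25, 0.15],
                 [0.20, 0.05, 0.25]])
p_x = p_xy.sum(axis=1)   # marginal of X
p_y = p_xy.sum(axis=0)   # marginal of Y

def H(p):                # Shannon entropy in nats, ignoring zero cells
    p = p[p > 0]
    return -np.sum(p * np.log(p))

kl = np.sum(p_xy * np.log(p_xy / np.outer(p_x, p_y)))
print(H(p_x) + H(p_y) - H(p_xy.ravel()), kl)   # equal, and >= 0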
Korteweg-de Vries equation: Term has evaporated
Well, after you non-dimensionalize away as many constants as you can, absorbing them into the normalizations of $x$ and $t$, you note your remaining equation is mostly linear except in one term, which I isolate in the second term, together with the linear object of your puzzlement, $$ (\partial_t +\partial_x^3)\eta + \sigma \partial_x \Bigl (\eta^2/2 + 2\alpha \eta/3 \Bigr ) =0, $$ where different authors choose the freedom of this non-dimensionalization to tweak the respective constants in all kinds of ways... experiment with these options yourself. Here, you are meant to observe that the $\alpha$ piece of the second term is phony, since you may complete the square in that term, $$ \partial_x \Bigl ( \tfrac{1}{2}(\eta +2\alpha/3)^2 - 2\alpha^2/9\Bigr ) = \tfrac{1}{2}\,\partial_x (\eta +2\alpha/3)^2 , $$ so redefining $u=\eta+2\alpha/3$ completely eliminates it from the problem: it is a phantom parameter, $$ u_t+u_{xxx} + \sigma u u_x=0 . $$
Artin's lemma and some morphisms.
In fact, the general result is the following: Let $E$ be a field, and let $G\subseteq \operatorname{Aut}(E)$ be a finite subgroup. Let $F = E^G$ be the elements of $E$ which are fixed by all elements of $G$, i.e. $F = \{x\in E: g(x) = x, \forall g \in G\}$. Then the extension $E/F$ is Galois, with Galois group $G$. As to the proof, the difficult part is Artin's lemma, and the question you have is actually the easy part. By Artin's lemma, we know that the extension $E/F$ is finite. Now for any finite extension $E/F$, there are always inequalities: $$\#\operatorname{Aut}(E/F) \leq \#\operatorname{Hom}_F(E, \overline F) \leq \deg(E/F),$$ where: $\operatorname{Aut}(E/F)$ is the set of automorphisms of $E$ fixing elements in $F$; $\operatorname{Hom}_F(E, \overline F)$ is the set of embeddings of $E$ into $\overline F$, an algebraic closure of $F$, which are the identity on $F$; $\deg(E/F)$ is the degree of the extension $E/F$. The first inequality becomes an equality if and only if $E/F$ is normal, and the second inequality becomes an equality if and only if $E/F$ is separable. Thus both equalities hold if and only if $E/F$ is Galois. In our case, we know that $G$ is a subgroup of $\operatorname{Aut}(E/F)$, hence we have $\deg(E/F) \geq \#\operatorname{Aut}(E/F) \geq \#G$; but Artin's lemma says that $\#G \geq \deg(E/F)$, so all terms are equal.
What is wrong in my answer, and what is the correct solution for this probability question?
The rationale behind the solution is this: the game conductor has a particular number from 1 to 5 that he wants A and B to pick. Let us say "2". Then

P(A picking 2) $= \frac{1}{5}$
P(B picking 2) $= \frac{1}{5}$
P(A picking other than 2) $= \frac{4}{5}$
P(B picking other than 2) $= \frac{4}{5}$

P(both picking 2) $= \frac{1}{5}\cdot\frac{1}{5}$
P(A picks 2, B picks other than 2) $= \frac{1}{5}\cdot\frac{4}{5}$
P(A picks other than 2, B picks 2) $= \frac{4}{5}\cdot\frac{1}{5}$
P(A picks other than 2, B picks other than 2) $= \frac{4}{5}\cdot\frac{4}{5}$

They don't win the game when any of the last three scenarios happens: $\frac{1}{5}\cdot\frac{4}{5}+\frac{4}{5}\cdot\frac{1}{5}+\frac{4}{5}\cdot\frac{4}{5} = \frac{24}{25}$. They win the prize with probability $\frac{1}{5}\cdot\frac{1}{5} = \frac{1}{25}$.

Obviously, the question has been worded badly. Your answer is correct for the wording of the question. Thanks, Satish
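If in doubt, a quick Monte Carlo check of the $\frac{1}{25}$ answer (a sketch assuming independent uniform picks from 1 to 5 and a fixed target number):

import random

trials = 10**6
target = 2
wins = sum(1 for _ in range(trials)
           if random.randint(1, 5) == target and random.randint(1, 5) == target)
print(wins / trials)   # close to 1/25 = 0.04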
Prove that the volume of the whole flow inside the pipe is equal to $k \pi R^4/2.$
You have to calculate the flux of $\vec{v}=v(r)\hat{u}_z$ through a section $\Sigma$ of the cylinder. In this notation, $\hat{u}_z$ is just the versor (unit vector) parallel to the cylinder axis. If we call $\hat{n}$ the unit normal to $\Sigma$, we get: $$\Phi_{\Sigma}(\vec{v}) = \int \langle \vec{v},\hat{n} \rangle d\Sigma = \int_0 ^ R v(r)\cdot 2\pi r\, dr$$ since the two vectors are parallel and for a circle we have $d\Sigma = 2\pi r\, dr$ (*). Completing the previous calculation returns the desired result. (*) Intuitively this is because you're "summing" all the circular annuli with infinitesimal width $dr$. If you want to be more precise, you just have to specify the parametrization of the circle and calculate its Jacobian, which is $r$. Then you get: $$\int_0 ^{2\pi} \int_0 ^R v(r)\, r\, dr\, d \theta = \int _0 ^ R v(r) \cdot 2 \pi r\, dr$$
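For instance, if the velocity profile is the Poiseuille one $v(r)=k(R^2-r^2)$ (an assumption here, suggested by the stated answer $k\pi R^4/2$ but not restated in this excerpt), the flux evaluates to $$\int_0^R k(R^2-r^2)\,2\pi r\,dr = 2\pi k\left[\frac{R^2 r^2}{2}-\frac{r^4}{4}\right]_0^R = 2\pi k\cdot\frac{R^4}{4} = \frac{k\pi R^4}{2}.$$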
Barycentric subdivision preserves geometric realization
Let us think of a function $\alpha:V_K\to [0,1]$ as a formal sum $\sum_{v\in V_K}\alpha(v)\cdot v$. The homeomorphism $|sd(K)|\to |K|$ is then given by mapping $S\in V_{sd(K)}=K$ to the formal sum $\sum_{v\in S}\frac{1}{|S|}\cdot v$, and "extending linearly". In other words, a point $\alpha\in|sd(K)|$ can be written as a formal sum $\sum_{S\in K} \alpha(S)\cdot S$, and then we map this formal sum to $|K|$ by replacing each $S$ with $\sum_{v\in S}\frac{1}{|S|}\cdot v$, to get the formal sum $\sum_{S\in K}\alpha(S)\sum_{v\in S}\frac{1}{|S|}\cdot v$, which we can think of as a formal sum of elements of $V_K$. I'll leave it to you to show that this is a homeomorphism. (As a hint for showing it is a bijection, given a point of $|K|$, you can figure out what simplex of $|sd(K)|$ it is in the image of by ordering its nonzero coefficients from smallest to biggest. For instance, a sum like $0.2a+0.3b+0.5c$ for three vertices $a,b,c$ can be split up as $0.6(\frac{1}{3}a+\frac{1}{3}b+\frac{1}{3}c)+0.2(\frac{1}{2}b+\frac{1}{2}c)+0.2c$, to write it as a convex combination of expressions of the form $\sum_{v\in S}\frac{1}{|S|}\cdot v$.)

The intuition here is that for each geometric simplex $|S|$ in $|K|$, we add a new vertex at the barycenter of $S$, the point $\sum_{v\in S}\frac{1}{|S|}\cdot v$ that is the average of all its vertices. We can then take convex combinations of the barycenters of $|S|$ and its faces to subdivide $|S|$ into many smaller simplices.

For a very simple example, suppose $K$ is just a line segment, with two vertices and an edge connecting them. Then $sd(K)$ has three vertices: two are just the vertices that $K$ had, but one of them comes from the edge. We think of this new vertex as being the midpoint of the edge of $K$, which we can connect to the two vertices of $K$ to split the edge into two smaller edges. So, $|sd(K)|$ is just two line segments joined together, which is homeomorphic to a single line segment $|K|$.
How is this set ascending?
When you increase $n$, you make it easier for a point $x$ to be in $E_n$, so the sets $E_n$ are non-decreasing. For example, in order for $x$ to be in $E_2$, $x$ must satisfy $|f(x)-f_k(x)|<\eta$ for $k=2,3,4,\ldots$. In order for $x$ to be in $E_3$, however, $x$ need only satisfy the inequality for $k=3,4,5,\ldots$; it no longer has to satisfy $|f(x)-f_2(x)|<\eta$. More generally, if $m<n$, and $x\in E_m$, then $|f(x)-f_k(x)|<\eta$ for all $k\ge m$ and therefore automatically for all $k\ge n>m$, so $x\in E_n$, and $E_m\subseteq E_n$. However, if there is a point $x\in E$ that satisfies $|f(x)-f_k(x)|<\eta$ for every $k\ge n$ but not for $k=m$, then $x\in E_n\setminus E_m$.
A relationship among multiple periodic arrays
Some thoughts on the problem, but no full answer, so posting as CW. Feel free to edit.

Note that the array of period $T_i$ has density $(T_i)^{-1}$ in the array $c$. Therefore by the pigeonhole principle it is necessary that $\sum_{i = 1}^N (T_i)^{-1} \leq 1$.

Note that if $T_i$ and $T_j$ are relatively prime, then there exists a solution $(k,l)\in \mathbb{Z}^2$ of the equation $k T_i + l T_j = s$ for any given integer $s$. Therefore necessarily $\mathrm{gcd}(T_i,T_j) > 1$ for any two periods.

Note that the above two conditions together are not sufficient: for the set $T_1 = 2$, $T_2 = 4$, $T_3 = 6$ there do not exist any $s_i$ that make a $c$ satisfying your condition. So what makes $\{2,4,6\}$ different from $\{4,6,8\}$? In both cases the GCD of the entire set is $2$. But we can reduce the first case to the case $\{2,3\}$, using the fact that $s_2-s_1$ and $s_3-s_1$ must both be odd numbers for $\{4,6\}$ not to collide with $\{2\}$.

Now, we can re-write your problem as follows. Given a list $\{T_i\}$ of positive integers, you want to find a list $\{s_i\}$ of integers in $[0,\mathrm{lcm}(\{T_i\}))$ such that for every $i,j$ the equation $$ k T_i + s_i = l T_j + s_j $$ has no solution. This says precisely that $s_i - s_j$ cannot be a multiple of $\mathrm{gcd}(T_i,T_j)$. So equivalently:

Problem Statement: Given a finite list $\{T_i\}$ of positive integers, is it possible to find a list $\{s_i\} \subset [0,\mathrm{lcm}(\{T_i\}))$ such that for every pair $i,j$, the number $\mathrm{gcd}(T_i,T_j)$ does not divide $s_i - s_j$? (A brute-force search based on this reformulation is sketched below.)

This gives us a way to interpret item 4 again. Let $\tau\subset \{T_i\}$ be a subset in which the GCD of any two elements is the same number, so that for $a,b\in \tau$ we have $\mathrm{gcd}(a,b) = \mathrm{gcd}(\tau)$. By the pigeonhole principle we see that necessarily $\mathrm{gcd}(\tau) \geq |\tau|$. In the case of $\{2,4,6\}$ we can take $\tau$ equal to the whole set, which has size $3$, but $\mathrm{gcd}(2,4) = \mathrm{gcd}(4,6) = \mathrm{gcd}(2,6) = 2$. In the case of $\{4,6,8\}$, since $\mathrm{gcd}(4,8) = 4$, the pigeonhole principle does not provide an obstacle.
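Here is the promised brute-force search over the reformulated problem (a sketch only: it enumerates all offset tuples in $[0,\operatorname{lcm})^N$, so it is exponential in the number of periods; math.lcm needs Python 3.9+):

from itertools import product
from math import gcd, lcm

def find_offsets(T):
    # Search for offsets s with gcd(T[i], T[j]) never dividing s[i] - s[j].
    L = lcm(*T)
    for s in product(range(L), repeat=len(T)):
        if all((s[i] - s[j]) % gcd(T[i], T[j]) != 0
               for i in range(len(T)) for j in range(i + 1, len(T))):
            return s
    return None

print(find_offsets([2, 4, 6]))   # None: no valid offsets exist
print(find_offsets([4, 6, 8]))   # (0, 1, 2): a valid choice of offsets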
Proof of being a compact set
Since $X$ is compact and $f$ is continuous, the image $f[X]\subset\mathbf{R}$ is compact. Let $y$ be the maximum of the image of $f$. Then $f^{-1}(\{y\})$ is a closed subset (the preimage of the closed set $\{y\}$) of a compact space, hence compact.
What is the name of this curve (figure inside)?
One can find a cubic, $y=ax^3+bx^2+cx+d$, that looks like $D$. EDIT: If you want a curve that looks like the blue curve in the edited version of the question, you can pick a few points you want the curve to go through and then join them up with cubic splines. There is a lot of information about these on the web, in the more applied Linear Algebra textbooks, and elsewhere; while some of the formulas may look complicated, there's really nothing there that a computer can't handle quickly and efficiently (a short sketch follows).
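A minimal sketch with SciPy (the points are made up; pick your own):

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.5, 4.0])   # points the curve should pass through
y = np.array([0.0, 2.0, 1.0, 3.0])
spline = CubicSpline(x, y)

xs = np.linspace(0.0, 4.0, 100)
ys = spline(xs)                      # a smooth curve through the chosen points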
Playing Dice and probabilities
Hint: Let $n$ denote the sum of the results of the first 6 throws. Then the set $\{n+i\mid i=1,\dots ,6\}$ contains exactly two elements that are multiples of 3. The point is that this is true for every integer $n$.
Finding coprime $p,q$ such that $pq=10! $ and $ p<q$
The number $10!$ has $4$ distinct prime factors, namely $2$, $3$, $5$, and $7$. Hence it has $2^4=16$ unitary divisors (divisors $d$ with $\gcd(d,10!/d)=1$): each is a product of a subset of the prime-power blocks $2^8$, $3^4$, $5^2$, $7$. These unitary divisors come in coprime pairs $\{d,\,10!/d\}$, and since $10!$ is not a perfect square, exactly one member of each pair lies below $\sqrt{10!}$. So half of the $16$, that is $8$, are under $\sqrt{10!}$.
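One can list the $8$ coprime pairs directly, using the prime-power blocks of $10! = 2^8\cdot 3^4\cdot 5^2\cdot 7$ (math.prod needs Python 3.8+):

from math import factorial, gcd, prod
from itertools import combinations

n = factorial(10)
blocks = [2**8, 3**4, 5**2, 7]       # prime-power blocks of 10!
pairs = set()
for r in range(len(blocks) + 1):
    for subset in combinations(blocks, r):
        p = prod(subset)
        q = n // p
        assert gcd(p, q) == 1        # each such split is automatically coprime
        pairs.add((min(p, q), max(p, q)))
print(len(pairs))                    # 8
print(sorted(pairs))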
How to approach this modulo proof?
We could also use Euler's criterion: if $p$ is prime and $a \neq 0$, then $a$ satisfies $m^2 \equiv a \pmod p$ for some $m$ if and only if $a^{(p-1)/2} \equiv 1 \pmod p$. So for our case, $0$ obviously works. Testing the remaining cases (here $(p-1)/2 = 3$): \begin{align} a &= 1,2,3,4,5,6 \\ a^3 &= 1, 8, 27, 64, 125, 216 \\ &\equiv 1, 1, -1, 1, -1,-1 \pmod 7 \end{align} So $0,1,2,4$ are the only solutions.
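A one-line check of this table in Python:

print([a for a in range(7) if a == 0 or pow(a, 3, 7) == 1])   # [0, 1, 2, 4]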
Elements of monoids divisible by arbitrary powers of a particular element
Recall that an epigroup (also called a group-bound semigroup) is a semigroup in which every element has a power that belongs to a subgroup. Note that the identity of the subgroup is an idempotent which is not required to be an identity for the semigroup. Let $S$ be an epigroup and let $x \in S$. Then some power of $x$, say $x^n$, belongs to a group $G$ with identity $e_x$. Note that $e_x$ is entirely determined by $x$. Indeed, suppose that $x^m$ belongs to some group $H$ with identity $e$. Then $x^{nm} \in G \cap H$ and thus $e_x = e$. Now, I claim that $y^n$ divides $e_y$ for all $n \geqslant 0$. Indeed, since $S$ is an epigroup, there exists $k > 0$ such that $y^k$ belongs to a group $G$ with identity $e_y$. Let us choose $q$ and $r$ such that $n = kq - r$ with $q > 0$ and $r \geqslant 0$. Then $y^ny^r = y^{kq} = (y^k)^q \in G$ and thus $y^n$ divides $y^ny^r$, which divides $e_y$. It follows that $y^n$ divides $x$ for all $n \geqslant 0$ if and only if $e_y$ divides $x$.
For fixed $n \in \mathbb{R}$, what are all the continuous functions that satisfy $nf(x) = f(nx)$?
Let's start with $n$ positive but not $1$. We can separate positive and negative $x$, so take any periodic continuous functions $g_1(x)$ and $g_2(x)$ with $g_1(x\pm 1)=g_1(x)$ and $g_2(x\pm 1)=g_2(x)$; a possibility is that they are both constant, another is that they each have period $1$, and they can be the same function. For $x \gt 0$ let $f(x)=x g_1({\log_n (x)})$, for $x \lt 0$ let $f(x)=x g_2({\log_n (-x)})$, and let $f(0)=0$. Meanwhile, for $n=1$ you simply have that $f(x)$ is any continuous function, and with $n=0$, almost as simply, $f(x)$ is any continuous function with $f(0)=0$.

It gets slightly more complicated for negative $n \not = -1$. Take a continuous function $h(x)$ on $[0,1]$ with $h(1)=-h(0)$ and extend it to $\mathbb R$ with $h(x\pm 1)=-h(x)$, so if $h$ is ever non-zero then it has period $2$ and $|h|$ has period $1$. For $x \gt 0$ let $f(x)=x h({\log_{-n} (x)})$, for $x \lt 0$ let $f(x)=x h(1+{\log_{-n} (-x)})$, and let $f(0)=0$. Meanwhile, for $n=-1$ you simply have that $f(x)$ is any odd continuous function, so $f(-x)=-f(x)$ and $f(0)=0$.

In particular, $f(x)=0$ is a possible solution for every value of $n$.
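A quick numerical spot check of the positive-$x$ construction for $n=2$ (with a made-up $1$-periodic $g$):

import numpy as np

g = lambda u: 1.0 + 0.3 * np.sin(2 * np.pi * u)   # any continuous 1-periodic g
f = lambda x: x * g(np.log2(x))                   # f(x) = x*g(log_n x), here n = 2

x = np.linspace(0.1, 10.0, 7)
print(np.allclose(2 * f(x), f(2 * x)))            # True: 2 f(x) = f(2x)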
Probability exponential distribution.
We need to decide whether to measure time in hours or minutes. Say minutes. Then the mean number of business arrivals per minute is $\frac{10}{60}$. Thus we want to know the probability that an exponentially distributed random variable with parameter $\lambda=\frac{10}{60}$ is $\gt 3$. The probability that it is less than or equal to $3$ is $1-e^{-30/60}$. But the probability it is bigger than $3$ is $e^{-30/60}$. A similar calculation for personal customers yields $e^{-60/60}$. For no customers at all, by independence, we want the probability of no business customers times the probability of no personal customers. As to the last question, if we want to calculate, let $W$ be the waiting time until the next customer. The probability that $W\gt w$ is the probability that $X\gt w$ and $Y\gt w$, where $X$ and $Y$ represent the waiting times for the two types of customers. Calculate this. You will recognize it as an exponential random variable. Indeed, it turns out that the distribution of $W$ can be written down immediately from the fact that the combined arrival rate is $30$ per hour. Remark: Things will get clearer with practice, and when the relationship between the Poisson distribution and the exponential has been developed.
Let f and g be sequences in a valued field F. Assuming that f converges to 0 and g is bounded, show that fg converges to 0.
As you correctly stated, there exist $c$ and $M$ such that $|g_n|\leq c$ for all $n>M$ (we may assume $c>0$), and for all $\epsilon>0$ there exists $N$ such that $|f_n|<\epsilon$ for all $n>N$ (this is using $g_n$ bounded and $f_n\rightarrow 0$, respectively). You want to show that for all $\epsilon>0$ there exists $N$ such that $|f_ng_n|<\epsilon$ for $n>N$. We can find some $N'$ such that $|f_n|<\frac{\epsilon}{c}$ for all $n>N'$, using the fact that $f_n\rightarrow 0$. Then $|f_ng_n|<\epsilon$ for all $n>N=\max(M,N')$.
Why does the gradient of a function $f$ only exist for functions that output scalars?
The gradient of a differentiable function $f : \mathbb R^{m \times n} \to \mathbb R$ is the matrix-valued function $$\nabla f : \mathbb R^{m \times n} \to \mathbb R^{m \times n}$$ whose value $\nabla f(\mathrm X)$ produces the directional derivative of $f$ in the direction of $\mathrm V \in \mathbb R^{m \times n}$ at $\mathrm X \in \mathbb R^{m \times n}$ via the following Frobenius inner product $$\langle \mathrm V, \nabla f (\mathrm X) \rangle$$ If the output of a function is not a scalar, then its directional derivative will not be a scalar either. Hence, there is no way of producing the directional derivative via some inner product, as inner products produce scalars (by definition).
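A small numerical illustration (a sketch for the concrete choice $f(\mathrm X) = \|\mathrm X\|_F^2$, whose gradient is $2\mathrm X$): the directional derivative computed by central differences matches the Frobenius inner product $\langle \mathrm V, \nabla f(\mathrm X)\rangle$.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))          # direction

f = lambda M: np.sum(M * M)          # f(X) = ||X||_F^2, so grad f(X) = 2X
grad = 2 * X

h = 1e-6
numeric = (f(X + h * V) - f(X - h * V)) / (2 * h)   # central difference
print(numeric, np.sum(V * grad))                    # the two values agree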
Is there a quicker way to work out highest common factors in $\Bbb{Z}[i]$?
Use the Euclidean algorithm for Gaussian Integers.
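A minimal sketch of that algorithm in Python, representing Gaussian integers as complex numbers and rounding the quotient to the nearest Gaussian integer so the remainder's norm shrinks (float rounding suffices for small inputs):

def ggcd(a, b):
    # Euclidean algorithm in Z[i]; the result is a gcd up to a unit (1, -1, i, -i).
    while b != 0:
        q = a / b
        q = complex(round(q.real), round(q.imag))   # nearest Gaussian integer
        a, b = b, a - q * b
    return a

print(ggcd(complex(11, 3), complex(1, 8)))   # (1-2j), up to a unit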
Do we have always $f(A \cap B) = f(A) \cap f(B)$?
I find a Venn diagram helpful. The set $f(A\cap B)$ is the purple on the RHS, whereas $f(A)$ is the union of purple and red and $f(B)$ is the union of purple and blue on the right. In order for $f(A\cap B)=f(A)\cap f(B)$, the red and blue regions in the codomain (the right) must be disjoint. However, it should be clear that, at least with sets, this is not generally the case, because all three regions on the right can be controlled independently by choosing the function $f$ as one wishes. For a concrete failure, take $f(x)=x^2$ with $A=\{-1\}$ and $B=\{1\}$: then $f(A\cap B)=f(\varnothing)=\varnothing$, while $f(A)\cap f(B)=\{1\}$. (This aid also helps with OJ's exercise.)
A quick question regarding hypothesis testing (decision variable)
You are somewhat correct, but not entirely. When dealing with two-population data where the variances are unknown, you must first determine whether the population variances are equal or not. This information could be given to you in the question/application, or you may need to determine it yourself based on sample data. You can use the $F$-test to determine whether the variances are equal. In particular, you are testing $$H_0 : \frac{\sigma_1^2}{\sigma_2^2} = 1 \qquad H_1 :\frac{\sigma_1^2}{\sigma_2^2} \{<, \neq, > \} 1$$ and the $F$-statistic is given by $$F = \frac{s_1^2}{s_2^2}$$ with degrees of freedom $v_1 = n_1 - 1$ and $v_2 = n_2 - 1$. Performing this test lets you know whether the population variances are equal or not. Note that this is only applicable when the populations are normally distributed and independent. In both cases, you are performing the $T$-test. If the variances are equal, $$ t = \frac{(\bar X_1 - \bar X_2) - (\mu_1 - \mu_2)}{\sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)}} $$ where $s_p^2$ is the pooled variance given by $$ s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 -2}$$ Here, the degrees of freedom are given by $v = n_1 + n_2 -2$. If the population variances are unequal, the test statistic is given by $$ t = \frac{(\bar X_1 - \bar X_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}} $$ with degrees of freedom given by $$v = \frac{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)^2}{\frac{(s_1^2/n_1)^2}{n_1 - 1} + \frac{(s_2^2/n_2)^2}{n_2 -1}}$$ Here, most likely $H_0 : \mu_1 - \mu_2 = 0$.
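In practice both variants are one call away; a sketch with SciPy on made-up samples (equal_var=True gives the pooled test, equal_var=False gives Welch's test):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10.0, 2.0, size=30)   # made-up sample 1
b = rng.normal(11.0, 3.0, size=25)   # made-up sample 2

t_pooled, p_pooled = stats.ttest_ind(a, b, equal_var=True)    # pooled-variance t-test
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)     # Welch's t-test
print(t_pooled, p_pooled)
print(t_welch, p_welch)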
A set of independent random variables are shifted and maintain independence
Let $F_k(x_1,\ldots,x_k):=(x_1,\,x_1+x_2,\,\ldots,\,x_1+\cdots+x_k)$, the map taking increments to partial sums. Then $F_k$ is Borel measurable and for any Borel sets $B_1\subset \mathbb{R}^k$ and $B_2\subset \mathbb{R}^j$, \begin{align} &\mathsf{P}((W_{t_1}',\ldots,W_{t_k}')\in B_1,(W_{s_1},\ldots,W_{s_j})\in B_2) \\ &\qquad=\mathsf{P}((W_{t_1}',W_{t_2}'-W_{t_1}',\ldots,W_{t_k}'-W_{t_{k-1}}')\in F_k^{-1}(B_1)) \\ &\qquad\quad\times\mathsf{P}((W_{s_1},W_{s_2}-W_{s_1},\ldots,W_{s_j}-W_{s_{j-1}})\in F_j^{-1}(B_2)) \\ &\qquad =\ldots \end{align}
What is the term for the highest probability interval of a given length on the range of a continuous random variable?
It appears that "most probable interval" is about as short as it gets; nothing standardized seems to exist for exactly this fixed-length notion. The closest standard relative is the "highest density interval" (in Bayesian statistics, the highest posterior density interval), which fixes the probability content and minimizes the length rather than fixing the length and maximizing the probability.
About localization of a finitely generated $k$-algebra
You only need to use the fact that the rings you're working with are finitely generated. In the first example, note that by the definition of $P$, $k[T_1,\ldots,T_d][X]/(P(X))$ is just a finitely generated subring of $K(A)$. The second case is almost the same: consider the generators of the $k$-algebra $A$, pick an $R$ for each one of them, and finally take the product of all the $R$'s.
Does a bijection between two sets $A$ and $B$ imply $P(a \in C ) = P(b \in C)$, if $A,B \subset C$?
Consider the normal distribution. You can define the bijection $f(x)=e^x$ from $\Bbb R$ to $(0,\infty)$. The probability of the former is $1$ and of the latter, $1/2$.
On how many ways can the teacher assign his grades
First, you choose which 20 students will be given an F. This is combinations without repetition, because the same student can't be given an F twice, and order does not matter. So there are $\binom{40}{20}$ ways to assign half of the class a grade of F. Then, for each of the 20 remaining students, you assign them an A, B, C or D. There are 4 different choices for each student; this is a permutation with repetition (multiple students can have the same grade, and it matters which student receives which grade). So there are $4^{20}$ ways to grade the rest of the class. The total answer should be $\binom{40}{20}\times 4^{20}$.
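For the record, the number is easy to evaluate:

from math import comb
print(comb(40, 20) * 4**20)   # ways to choose the F's times ways to grade the rest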
If the sum of the differences of consecutive terms of a sequence between any two terms is less than or equal to one, then the sequence converges
Since you are new to this topic, I will avoid using the term "series"; hopefully this answer is clear to you. Let $S_n=\sum_{k=1}^n|a_k-a_{k+1}|$ for $n\in\mathbb N$. By hypothesis, $S_n\leq 1$ and $S_n$ is increasing, so it is convergent. Given any $\epsilon>0$, there exists $N>0$ such that for any $n>m>N$ we have $S_n-S_m<\epsilon$, which means $|a_{m+1}-a_{m+2}|+\cdots+|a_{n}-a_{n+1}|<\epsilon.$ By the triangle inequality, we then have $|a_{n+1}-a_{m+1}|<\epsilon$, so $\{a_n\}$ is a Cauchy sequence, as desired.
How to efficiently determine the number of samples required for a given statistic?
Do you think the percentages claimed in articles are exact, or are they most of the time rounded to one or two decimal places? That being said, even if the value were exact, there is one single irreducible fraction that leads to the result, but you also have an infinite number of non-irreducible fractions, so bounding the numerator or denominator is not really informative for getting an upper bound on the pool size. To see this, note that if you double the number of people taken into the survey and the number of people who satisfy the predicate also doubles, your number does not change. It can only indicate lower bounds (and I repeat that this is only true in the case where the reported value is exact).
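To make the "lower bound" point concrete, here is a sketch that finds the smallest pool size compatible with a reported, rounded percentage (the 37.5% value is made up):

def min_sample_size(percent, decimals=1, limit=10**5):
    # Smallest n for which some count k out of n rounds to the reported percent.
    for n in range(1, limit):
        for k in range(n + 1):
            if round(100 * k / n, decimals) == percent:
                return n, k
    return None

print(min_sample_size(37.5))   # (8, 3): 3/8 = 37.5%, so the pool has at least 8 people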
A semisimple commutative Banach algebra with a non-semisimple quotient
Take $A$ to be the Wiener algebra $A(\mathbb T)$, the algebra of all continuous functions on $\mathbb T$ with absolutely convergent Fourier series; in other words, $A(\mathbb T)$ is the image of $\ell_1(\mathbb Z)$ under the Fourier transform. The product of $A(\mathbb T)$ is the pointwise product and the norm is induced by $\ell_1(\mathbb Z)$, $$\Vert f\Vert_A=\sum_{n\in\mathbb Z} \vert\widehat f(n)\vert$$ It is well known that the characters of $A(\mathbb T)$ are the evaluations at points of $\mathbb T$. In particular, $A$ is semi-simple. Moreover, if $I$ is a closed ideal in $A$ with hull $h(I)=E\subset\mathbb T$, then the characters of $A/I$ are exactly the evaluations $\delta_s$ with $s\in E$. In particular, if $f\in A$ is identically $0$ on $E=h(I)$, then the spectral radius of $[f]_{A/I}$ is $0$.

For any closed set $E\subset \mathbb T$, denote by $I(E)$ the ideal of all $f\in A$ such that $f\equiv 0$ on $E$, and by $J(E)$ the ideal of all $f\in A$ such that $f\equiv 0$ on a neighbourhood of $E$. Since $A$ is a regular Banach algebra, we have $h(\overline{J(E)})=E$. Moreover, by a famous result of P. Malliavin there exist non-spectral sets for $A(\mathbb T)$, i.e. closed sets $E\subset \mathbb T$ such that $\overline{J(E)}\neq I(E)$.

Now, if $E$ is a non-spectral set for $A$, then $A/\overline{J(E)}$ is not semi-simple just by definition: if you take any $f\in I(E)$ which is not in $\overline{J(E)}$, then $[f]$ is not $0$ in $A/\overline{J(E)}$ but has spectral radius $0$.
Normalising angled Earth magnetic field
I've got the answer on StackOverflow by Spektre:

I think you need to create a local reference coordinate system similar to NEH (north, east, height/altitude/up), something like the Representing Points on a Circular Radar Math approach. It's commonly used in aviation as a reference frame (heading is derived from it), so your reference frame is computed from your geo location, with its axes pointing North, East and Up.

Now the problem is what "aligned North/South" and "normalizing" mean here. If the reference device measures just a projection, then you would need to do something like dot(measured_vector, reference_unit_direction), where the direction would be the North direction as a unit vector. If the reference device measures a full 3D vector too, then you need to transform both the reference and the tested measured data into the same coordinate system. That is done by using transform matrices, so a simple matrix * vector multiplication will do. Only then compute the values H, F, Z, which I do not know what they are and am too lazy to go through the papers... I would expect E, H or B vectors instead.

However, if you do not have the geo location at the moment of measurement, then you have just the North direction with respect to the ISS in the form of Euler angles, so you cannot construct a 3D reference frame at all (unless you have 2 known vectors instead of just one, like Up). In such a case you need to go with option 1, projection (using the dot product and the North direction vector), so you will handle just scalar values instead of 3D vectors afterwards.

[Edit1] From the link of yours:

The geomagnetic field vector, B, is described by the orthogonal components X (northerly intensity), Y (easterly intensity) and Z (vertical intensity, positive downwards);

This is not my field of expertise, so I might be wrong here, but this is how I understand it:

B(Bx,By,Bz) - magnetic field vector
a(ax,ay,az) - acceleration

Now F is the magnitude of B, so it is invariant under rotation:

F = |B| = sqrt( Bx*Bx + By*By + Bz*Bz )

You need to compute the X, Y, Z values of B in the NED reference frame (North, East, Down), so you need the basis vectors first:

Down = a/|a| // gravity points down
North = B/|B| // north is close to the B direction
East = cross(Down,North) // East is perpendicular to Down and North
North = cross(East,Down) // North is perpendicular to Down and East; this moves North into the horizontal plane

You should render them to visually check that they point in the correct directions; if not, negate them by reordering the cross-product operands (I might have the order wrong, as I am used to using an Up vector instead). Now just convert B to NED:

X = dot(North,B)
Y = dot(East,B)
Z = dot(Down,B)

And now you can compute H:

H = sqrt( X*X + Y*Y )

The vector math needed for this you will find in the transform matrix link above. Beware: this will work only if no accelerations are present (the sensor is not on a robotic arm during its operation, or the ISS is not doing a burn...). Otherwise you need to obtain the NED frame differently (like from onboard systems).

If this does not work correctly, then you can compute NED from your ISS position, but for that you would need to know the exact orientation and displacement of the sensor with respect to the simulation model that provides your location. I do not know what rotations the ISS does, so I would not touch that subject except as a last resort. I am afraid that I will not have time for coding for some time... anyway, coding without sample input data or the coordinate system explanations and all the input/output variables is insanity...
A simple negation of an axis will invalidate the whole thing, and there are a lot of duplicities along the way; to cover all of them you would end up with many, many versions to try. Apps should be built up incrementally, but I am afraid that without access to the simulation or the real hardware that is not possible, and there is a whole bunch of things that could go wrong, making even simple programs a magnitude harder to code. I would first check $F$, as it does not require any "normalization", to see whether the results are off or not. If they are off, it might suggest different units or god knows what...
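For what it's worth, here is a minimal NumPy sketch of the NED recipe quoted above (the accelerometer and magnetometer samples are made up, and the sign conventions must be checked against your hardware, as the quoted answer warns):

import numpy as np

a = np.array([0.1, 0.2, -9.7])     # hypothetical accelerometer sample (gravity)
B = np.array([18.0, 4.0, -43.0])   # hypothetical magnetometer sample (uT)

down = a / np.linalg.norm(a)       # Down = a/|a|
north = B / np.linalg.norm(B)      # first guess: North close to B
east = np.cross(down, north)
east /= np.linalg.norm(east)       # East perpendicular to Down and North
north = np.cross(east, down)       # push North into the horizontal plane

X, Y, Z = north @ B, east @ B, down @ B   # components of B in the NED frame
H = np.hypot(X, Y)                 # horizontal intensity
F = np.linalg.norm(B)              # total intensity, rotation-invariant
print(X, Y, Z, H, F)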
Central limit theorem
Letting $$Z_n= \frac{X_1+ \cdots + X_n - n \mu}{\sqrt{n}}$$ and knowing that $Z_n \to N(0, \sigma^2)$ in distribution, we get (granted enough uniform integrability, e.g. when all moments of $X_1$ are finite) that $E(Z_n^k)\to \sigma^k (k-1)!!$ if $k$ is even, and $E(Z_n^k)\to 0$ otherwise.
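A Monte Carlo sanity check for $k=4$ (a sketch with $X_i$ exponential, so $\mu=\sigma=1$ and the limit is $\sigma^4\cdot 3!! = 3$):

import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 20000
x = rng.exponential(1.0, size=(reps, n))   # X_i with mu = sigma = 1
z = (x.sum(axis=1) - n) / np.sqrt(n)
print(np.mean(z**4))                        # close to 3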
Why if $\phi\in\mathrm{Aut}(L)$ and $x\in L$ then $\phi(\operatorname{ad} x)\phi^{-1}=\operatorname{ad}\phi(x)$?
Sure. An automorphism of a Lie algebra respects all of its structure, including the bracket: for $\phi\in\operatorname{Aut}L$ and $x,y\in L$ we have $\phi([x,y])=[\phi(x),\phi(y)]$. Hence for any $y\in L$, $$\bigl(\phi\circ\operatorname{ad}x\circ\phi^{-1}\bigr)(y)=\phi\bigl([x,\phi^{-1}(y)]\bigr)=[\phi(x),\phi(\phi^{-1}(y))]=[\phi(x),y]=\bigl(\operatorname{ad}\phi(x)\bigr)(y).$$
Are there any certainties of properties of infinity?
You're talking specifically about the projective compactification of the number line, and the issue is that in that setting it doesn't make sense to multiply numbers together. Or, more specifically, you just cannot multiply $0 = [0:1]$ by $\infty = [1:0]$, since this would result in $[0:0]$ which does not represent a point on $\mathbb{P}^1$ at all. Read more: http://en.wikipedia.org/wiki/Projective_line
Order between probability measures: sets full below
I think there is a typo in the definition of the order on $X$; I think you mean $\forall x \in \mathbb Z$? I'm going to assume that you consider monotone functions $f: X \to \mathbb{R}_{\geq 0}$ such that $f$ is measurable. Potentially you could get rid of the measurability assumption by assuming that the $\sigma$-algebra contains all sets full-below (equivalently full-above) and then showing that all monotone functions are measurable. I define $$C \subset X \text{ full-above } \Longleftrightarrow \forall x \in C: x \prec y \Rightarrow y \in C.$$ We observe that $C$ is full-above iff $C^c$ is full-below: let $x \in C^c$ and suppose that $y \prec x$ with $y \in C$; this would imply $x \in C$, a contradiction.

Now to prove the statement, we will as usual start by proving it for simple functions, so we first need to identify the simple monotone functions. Assume that we have a simple function $f$ that only takes 2 values, $0 = a_0$ and $a_1 > 0$; then we must have $f = a_1 1_{C}$, i.e. $a_1$ times the indicator function of some set $C$. Assume $x \in C$ and $x \prec y$; then $a_1 = f(x) \leq f(y)$, so we must have $f(y) = a_1$, hence $C$ is full-above. But then $C^c$ is full-below and we must have $$1-\mu(C) = \mu(C^c) \geq \nu(C^c) = 1 - \nu(C).$$ This implies that $\mu(C) \leq \nu(C)$ and hence $\int f d\mu \leq \int f d\nu$ (remember our coefficients are non-negative).

Assume that we have a simple, monotone function $f = \sum_{i=0}^n a_i 1_{C_i}$ with $0 = a_0 < a_1 < \dots < a_n$ and the $C_i$ disjoint. As before we see that $C_n$ must be full-above. Similarly we can conclude that $D_m = \cup_{i=m}^n C_i$ is full-above for all $m$. But this gives us $$ a_1\mu(D_1) \leq a_1 \nu(D_1).$$ We also see that $$ (a_2 - a_1) \mu(D_2) \leq (a_2 - a_1) \nu(D_2); $$ adding the two inequalities gives $$ a_1\mu(C_1) + a_2 \mu(D_2) \leq a_1 \nu(C_1) + a_2 \nu(D_2). $$ In general we see that $$ (a_m - a_{m-1})\mu(D_m) \leq (a_m - a_{m-1}) \nu(D_m), $$ hence by induction $$ a_1\mu(C_1) + \dots + a_{m-1}\mu(C_{m-1}) + a_m \mu(D_m) \leq a_1 \nu(C_1) + \dots + a_{m-1}\nu(C_{m-1}) + a_m \nu(D_m).$$ But this proves $$ \int f d\mu \leq \int f d\nu.$$

Now the only thing left is to realise that we can approximate any monotone function by simple monotone functions. Observe that if $f$ is monotone then $f^{-1}([a, \infty))$ is full-above, and similarly $f^{-1}((a, \infty))$ is full-above. Hence for $m > 0$ put $C_i = f^{-1}([\frac{i}{2^m}, \frac{i+1}{2^m}))$ and $a_i = \frac{i}{2^m}$ for $i \leq n = m\cdot 2^m$; then if we set $$ g_m = \sum_{i=0}^n a_i 1_{C_i}, $$ we must have $g_{m} \leq g_{m'}$ for $m \leq m'$ and $\lim g_m(x) = f(x)$, i.e. we have monotone, pointwise convergence of positive functions. Furthermore we see that the $g_m$ are monotone since $D_i = f^{-1}([\frac{i}{2^m}, \infty))$ is full-above for all $i$. Thus we get $$ \int f d\mu = \lim_{m \to \infty} \int g_m d\mu \leq \lim_{m \to \infty} \int g_m d\nu = \int f d\nu. $$
Is modulus operation necessary to define $\mathbb{C}$?
Definition: A field is a set $k$ with two binary operations $+,\cdot : k\times k\to k$ satisfying the following axioms:

1. For all $a,b,c\in k$, $(a + b) + c = a + (b + c)$.
2. For all $a,b\in k$, $a + b = b + a$.
3. There exists $0\in k$ such that $a + 0 = a$ for all $a\in k$.
4. For every $a\in k$, there exists $a'\in k$ such that $a + a' = 0$.
5. For all $a,b,c\in k$, $(a\cdot b)\cdot c = a\cdot (b\cdot c)$.
6. For all $a,b\in k$, $ab = ba$.
7. There exists $1\in k\setminus\{0\}$ such that $1\cdot a = a$ for all $a\in k$.
8. For all $a,b,c\in k$, $a\cdot(b + c) = a\cdot b + a\cdot c$.
9. For all nonzero $a\in k$, there exists $\tilde{a}\in k$ such that $a\cdot\tilde{a} = 1$.

Note that there is no "modulus" involved in the above definition. This is all a very long-winded way to say that $\Bbb C$ as a field is essentially only determined by its addition and multiplication, so that the answer to "can a field $\Bbb D$ possess the same operations of addition and multiplication as $\Bbb C$, but a different field structure?" is no. There are many fields which are algebraically "the same" as $\Bbb C$ (that is, they are isomorphic as fields). Here are a few:

- $(\Bbb C,+,\cdot)$, where $\Bbb C = \{a + bi\mid a,b\in\Bbb R\}$ with the "obvious" addition and multiplication subject to the rule that $i^2 = -1$,
- $(\Bbb R^2, + , \cdot)$, where $(a,b) + (c,d) = (a + c, b + d)$ and $(a,b)\cdot (c,d) = (ac - bd, ad + bc)$,
- the algebraic closure of the field $\Bbb R$ of real numbers,
- $\Bbb R[x]/(x^2 + 1)$,
- the unique (up to isomorphism) algebraically closed field of characteristic $0$ with cardinality $2^{\aleph_0}$.

However, it is possible to put different notions of modulus on the same field. As mentioned in the comments, the modulus provides topological/geometric information (it is essentially giving a way to measure distance in your field), not algebraic information. Let us define a modulus on a field in general:

Definition: Let $(k,+,\cdot)$ be a field. Then a modulus on $k$ is a function $v : k\to\Bbb R$ such that

1. $v(a)\geq 0$ for all $a\in k$,
2. $v(a) = 0$ if and only if $a = 0$,
3. $v(ab) = v(a)v(b)$ for all $a,b\in k$,
4. $v(a + b)\leq v(a) + v(b)$ for all $a,b\in k$.

The distance between two points $a,b\in k$ can then be defined as $v(a - b)$, and you can check that this satisfies the usual properties one would expect of a notion of distance. We can now talk about valued fields: these are fields with a choice of modulus on that field; i.e., a valued field is the data of $(k,+,\cdot,v)$, where $(k,+,\cdot)$ is a field and $v$ is a modulus on $k$. It is possible to have two valued fields whose underlying algebraic field structure is the same, but whose moduli are not equivalent. For example, take the first valued field to be $\Bbb C$ with the usual modulus, as you've defined, and take the second to be $\Bbb C$, but now with the following modulus: $v(a + bi) = 1$ for all nonzero $a + bi\in\Bbb C$, and $v(0) = 0$. This is kind of a silly modulus, but it should be intuitively obvious that this is different from the normal modulus on $\Bbb C$. There are other more complicated examples as well. You can also look here for a more thorough discussion, or just google search "absolute value on a field."

When one speaks of the complex numbers, one usually means $\Bbb C$ as a valued field, not just as a field, because the geometric/topological structure determined by the usual modulus on $\Bbb C$ is very important.
Complex geometry and complex analysis are very rich subjects with many beautiful results that use the geometry coming from the usual modulus on $\Bbb C$ in a crucial way. These results are generally not transferable to valued fields whose underlying field is $\Bbb C$ but which carry a different modulus.
A problem on Measure preserving dynamical system
This answer is just a rewriting of the above comment by 'anthonyquas'. In an ergodic dynamical system $(X,\mathscr{B},\mu,T)$, $\mathbb{E}_{\mu}(f|T^{-1}\mathscr{B})=0$ need not imply $f=0$ $\mu$-a.e. For example, consider $X=\{0,1\}^{\mathbb{N}}$ with the measure $\mu$ induced from the measure $\nu$ on $\{0,1\}$ defined by $\nu(0)=\nu(1)=\frac{1}{2}$, and $T:X\rightarrow X$ the left shift map defined by $T(x_{0},x_{1},x_{2},...)=(x_{1},x_{2},...)$. Then this system is ergodic. But consider the continuous function $f:X\rightarrow\mathbb{R}$ defined by $$ f(x_0,x_1,x_2, ...)= \begin{cases} 1 & \text{ if }x_0=0 \\ -1 & \text{ if }x_0=1 \end{cases} $$ Then for any $A\in\mathscr{B}$, $T^{-1}A=\{0,1\}\times A$, and therefore $$\int_{T^{-1}A}fd\mu=\int_{T^{-1}A\cap \{x_0=0\}}fd\mu + \int_{T^{-1}A\cap \{x_0=1\}}fd\mu=\frac{\mu (A)}{2}-\frac{\mu (A)}{2}=0.$$ So in this case $\mathbb{E}_{\mu}(f|T^{-1}\mathscr{B})=0$, but $f\neq0$ everywhere.
Finding the inverse Laplace transform of $\frac{2s + 1}{s(s + 1)(s + 2)}$ without using partial fractions
It is not right away the convolution of two functions, but you can split it into two fractions, e.g. $$\frac{2s+1}{s(s+1)(s+2)} = \frac{2}{(s+1)(s+2)} + \frac{1}{s(s+1)(s+2)},$$ use convolution on each one, and add the results.
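If you just want to check the final answer against the convolution computation, SymPy can produce it directly (a sketch; the expected result is $\frac12 + e^{-t} - \frac32 e^{-2t}$ for $t>0$):

from sympy import symbols, inverse_laplace_transform, simplify

s, t = symbols('s t', positive=True)
F = (2*s + 1) / (s * (s + 1) * (s + 2))
f = inverse_laplace_transform(F, s, t)
print(simplify(f))   # 1/2 + exp(-t) - 3*exp(-2*t)/2 (possibly times Heaviside(t))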
A space homotopy equivalent to its subspace implies the inclusion map is a homotopy equivalence?
Consider the inclusion of the circle of radius one with center at the point $(10,0)$ into the space $X=\mathbb R^2-\{(0,0)\}$. Both the circle and $X$ are homotopy equivalent to a circle, but this inclusion is nullhomotopic (the circle bounds a disk inside $X$, since it does not enclose the origin), so it induces the trivial map on $\pi_1\cong\mathbb Z$ and cannot be a homotopy equivalence.
Need help to complete/correct a proof of the spherical law of sines
Okay, I think I have an answer to my own questions, so I wanted to report the findings. For reference, here is the pic of the wiki section that I referenced (in case it gets edited). As one of the commenters pointed out, it is conventionally assumed that in all spherical triangles each side and angle is $< \pi$. This means I can drop the absolute value signs by assumption, and everything holds exactly as expected. However, I think you can still prove the law of sines in the general case. The reader should be careful and check my work for errors here. The answers to my questions are as follows.

In the general case, we can prove the law of sines in a different way than the way I presented in my original post. Given our triangle (vertices $A, B, C$ and sides $a, b, c$), let $\vec{u}, \vec{v}, \vec{w}$ be the unit vectors from the origin pointing to vertices $A, B, C$, respectively. Using rotations and reflections, reorient the triangle so that $$ \vec{u} = (0, 0, 1), \quad \vec{v} = (\sin c, 0, \cos c), \quad \vec{w} = (\sin b\cos A, \sin b\sin A, \cos b). $$ Remember that because we are working on the unit sphere, the angle made between two vectors is the same as the arc length on the sphere from one vector head to the other (this applies even when we make the angle $>\pi$). Then a certain triple product gives us $$ \vec{u}\cdot(\vec{v}\times\vec{w}) = \begin{vmatrix} 0 & 0 & 1 \\ \sin c & 0 & \cos c \\ \sin b\cos A & \sin b\sin A & \cos b \end{vmatrix} = \sin c\sin b\sin A. $$ The triple product is invariant under cyclic permutations of the vectors. By reorienting the coordinate system (keeping the orientation the same) and applying the same identity, we obtain $$ \sin c\sin b\sin A = \sin a\sin c\sin B = \sin a\sin b\sin C. $$ By dividing by $\sin a\sin b\sin c$ (no side length can be equal to $\pi$, or else we wouldn't have a triangle, so this is a nonzero quantity), we obtain the law of sines. I think this reasoning works even when some of the angles and sides are $> \pi$.

Concerning the formula in the wiki page: if we're assuming all angles and sides are $< \pi$, then we can ignore the absolute value signs. Otherwise you need them (I may make a counterexample if someone requests it). In any case, I still take issue with the fact that they didn't justify why the numerator is nonnegative before square rooting it, even though showing it is pretty trivial in my original post.

The original approach works only if you assume all angles and sides are $<\pi$. Otherwise, you do need to do the completely different approach taken in the answer to #1. It's just easier that way.
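A numerical sanity check of the triple-product identity (a sketch: random unit vectors give a generic triangle with all sides and angles in $(0,\pi)$, and the vertex angles are recovered from the spherical law of cosines):

import numpy as np

rng = np.random.default_rng(1)
u, v, w = (x / np.linalg.norm(x) for x in rng.normal(size=(3, 3)))

arc = lambda p, q: np.arccos(np.clip(p @ q, -1.0, 1.0))
a, b, c = arc(v, w), arc(u, w), arc(u, v)     # sides opposite u, v, w

def vertex_angle(s_opp, s1, s2):
    # spherical law of cosines solved for the vertex angle
    return np.arccos((np.cos(s_opp) - np.cos(s1) * np.cos(s2))
                     / (np.sin(s1) * np.sin(s2)))

A, B, C = vertex_angle(a, b, c), vertex_angle(b, a, c), vertex_angle(c, a, b)
print(np.sin(A) / np.sin(a), np.sin(B) / np.sin(b), np.sin(C) / np.sin(c))
print(abs(u @ np.cross(v, w)) / (np.sin(a) * np.sin(b) * np.sin(c)))  # same ratio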
Convergence testing of the improper integral $\int_{0}^{\infty}\frac{\ln x}{\sqrt{x}(x^2-1)}\ \text dx$
$$ \int_0^2 \frac{\ln{x}}{\sqrt{x}(x^2-1)}dx = 2 \int_0^{\sqrt{2}} \frac{\ln{y^2}}{(y^4-1)}dy = 4 \int_0^{\sqrt{2}} \frac{\ln{y}}{(y^4-1)}dy$$ (substituting $x = y^2$). Then we have polynomials and so can apply partial fractions to get: $$ -2 \int_0^{\sqrt{2}} \frac{\ln{y}}{(y^2+1)}dy - \int_0^{\sqrt{2}} \frac{\ln{y}}{(y+1)}dy + \int_0^{\sqrt{2}} \frac{\ln{y}}{(y-1)}dy.$$ Note that the first two of these integrands are bounded in absolute value by $|\ln{y}|$, and since $\int_{0}^{\sqrt{2}} |\ln{y}|\, dy < \infty$ (as $\int \ln y\, dy = y \ln{y} - y$), we only have to worry about $$\int_0^{\sqrt{2}} \frac{\ln{y}}{(y-1)}dy = \int_0^{\sqrt{2}} \frac{1}{y-1} \sum_1^\infty \frac{(-1)^{k-1}(y-1)^{k}}{k} dy = \int_0^{\sqrt{2}} \sum_1^\infty \frac{(-1)^{k-1}(y-1)^{k-1}}{k} dy$$ (using the power series for $\ln y$). We can then swap the integral and sum by uniform convergence of a power series on its interval of convergence (to be totally rigorous you should have some $\epsilon$ rather than $0$ as the lower bound of the integral at this point): $$= \int_0^{\sqrt{2}} \sum_1^\infty \frac{(1-y)^{k-1}}{k} dy = \sum_1^\infty \frac{-(1-y)^k}{k^2}\Big|_0^{\sqrt{2}} = \sum_1^\infty \frac{-(1-\sqrt{2})^k}{k^2} + \frac{1}{k^2} < \infty$$ (since $\sum 1/k^2$ converges, and the first sum is dominated by this as well, because $|1-\sqrt{2}|<1$).
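A quick numerical confirmation that the integral over $(0,2)$ is finite (a sketch with mpmath; $x=1$ is only a removable singularity of the integrand, but splitting the interval there helps the quadrature):

from mpmath import mp, quad, log, sqrt

mp.dps = 20
f = lambda x: log(x) / (sqrt(x) * (x**2 - 1))
print(quad(f, [0, 1, 2]))   # a finite value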
Real values of $x$ in $\sqrt{5-x} = 5-x^2$.
$$ \begin{align} \sqrt{5-x}&=5-x^2\\ 5-x &= \left(5-x^2\right)^2\\ 5-x &= x^4-10x^2+25\\ x^4-10x^2+25-5+x &= 0\\ x^4-10x^2+x+20 &= 0\\ (x^2-x-4)(x^2+x-5) &= 0 \end{align} $$ $$ \begin{align} x^2-x-4=0 &\vee x^2+x-5=0\\ x=\frac{-(-1)\pm\sqrt{(-1)^2-4\cdot 1 \cdot (-4)}}{2\cdot 1} &\vee x=\frac{-1\pm\sqrt{1^2-4\cdot 1 \cdot (-5)}}{2\cdot 1}\\ x=\frac{1\pm\sqrt{17}}{2} &\vee x=\frac{-1\pm\sqrt{21}}{2} \end{align} $$ Two of the $4$ candidates survive the check against the original equation (squaring can introduce extraneous roots, so we need $5-x^2=\sqrt{5-x}\geq 0$): $$ \begin{align} x_1 &= \frac{1-\sqrt{17}}{2}\\ x_2 &= \frac{-1+\sqrt{21}}{2} \end{align} $$ I don't see the benefit of using geometry! This is the fastest way.
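If you want a machine check, SymPy's solve handles the radical equation and discards the extraneous roots introduced by squaring (a sketch):

from sympy import sqrt, solve, symbols

x = symbols('x', real=True)
print(solve(sqrt(5 - x) - (5 - x**2), x))
# [1/2 - sqrt(17)/2, -1/2 + sqrt(21)/2]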
Limit $\lim_{x \rightarrow -\infty} (1 + x^3) = -\infty$ by epsilon-delta
Let $M < 0$ be given, and choose $N$ such that $N < \sqrt[3]{M-1}$. For $x < N$ we have $x^3 < N^3 < M - 1$, hence $1 + x^3 < 1 + (M - 1) = M$. This proves the result.
How to find all solutions of the ODE $x'=3x^{\frac{2}{3}}, x(0)=0$
First of all, $x'$ exists, so $x$ is continuous. Since $x^{2/3}=(x^{1/3})^2$, we have $x'\ge 0$, so $x$ is increasing. Then $x$ is either constant, equal to zero, on $J:=[0,\infty)$, or there exists a maximal $t_0=\sup x^{-1}(\ \{0\}\ )=\max x^{-1}(\ \{0\}\ )$ in $J$ such that $x(t_0)=0$. We will only consider this last case, $t_0<\infty$. Then we have $x'>0$ on $(t_0,\infty)$. Now we forget about the condition at $0$ for a while and solve the given differential equation on $(t_0,\infty)$, knowing $x>0$. On $(0,\infty)$, the function $t\to t^{1/3}$ is of class $C^1$, so we can build as a composition the differentiable function $$ y = x^{1/3}\ , $$ which satisfies by the chain rule $$ y'= (x^{1/3})' =\frac 13x^{-2/3}\cdot x' =\frac 13x^{-2/3}\cdot 3 x^{2/3}=1\ . $$ From $y'=1$ on $(t_0,\infty)$ we find a constant $C$ with $y(t)=t+C$ on $(t_0,\infty)$. So far we know about $x$ that it is $0$ on $[0,t_0]$, that it is $t\mapsto y^3(t)=(t+C)^3$ on $(t_0,\infty)$, and that it is continuous at $t_0$. The only matching constant is $C=-t_0$. We get the solutions, and only the solutions, from the OP. (It is clear that these are of class $C^1$ and satisfy the given differential equation.)
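A small symbolic check of the nontrivial branch (a sketch: with $s = t - t_0 \geq 0$, the function $x = s^3$ satisfies $x' = 3x^{2/3}$):

from sympy import symbols, diff, Rational, simplify

s = symbols('s', nonnegative=True)    # s = t - t0 on the branch t >= t0
x = s**3
print(simplify(diff(x, s) - 3 * x**Rational(2, 3)))   # 0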
An equivalent definition of connectedness in General Topology
Let's prove "if $X$ is connected, then so is any space homeomorphic to $X$". So let $X$ be connected, and let $Y$ be homeomorphic to $X$, which means there is a function $h: X \rightarrow Y$ that is 1-1 and onto (so a bijection) such that $h$ and $h^{-1}$ are both continuous. Suppose (for a contradiction) that $Y$ is not connected, so there exists a separation $U,V$ of $Y$: $U,V$ are both non-empty and open in $Y$, disjoint, and their union is $Y$. Then by continuity of $h$, $h^{-1}[U]$ and $h^{-1}[V]$ are open, and as (for all functions) $\emptyset = h^{-1}[U \cap V] = h^{-1}[U] \cap h^{-1}[V]$, these sets are also disjoint; moreover $X = h^{-1}[U \cup V] = h^{-1}[U] \cup h^{-1}[V]$, so their union is $X$. Both are non-empty as $h$ is surjective. So $h^{-1}[U],h^{-1}[V]$ form a separation of $X$, which cannot be, as $X$ is connected. Contradiction. Note that we only needed $h$ onto and continuous; being 1-1 and having a continuous inverse are not needed. In fact, we showed that the continuous image of a connected space is connected, which is usually one of the first things one proves about connectedness. The remark from Munkres only meant to say, I think, that connectedness is entirely defined in terms of open sets and set theory (disjointness, union) and so is a so-called topological property, so any space homeomorphic to a connected space is also connected. E.g. metric completeness is not such a property. I don't think it was meant as an alternative definition at all, just a quick remark that the property is clearly a topological one.
Decomposition of a polynomial over generators of an ideal
This can be done with SAGE or Singular. http://www.sagemath.org/doc/reference/polynomial_rings/sage/rings/polynomial/multi_polynomial_ideal.html http://ask.sagemath.org/question/8827/find-specific-linear-combination-in-multivariate-polynomial-ring/
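For instance, in plain Python with SymPy one can divide by a Gröbner basis of the ideal: a zero remainder certifies membership, and the quotients give a decomposition over the basis elements (a sketch with made-up generators; note the quotients are with respect to the Gröbner basis, not the original generators, unless you track the change of basis):

from sympy import symbols, groebner, reduced, expand

x, y, z = symbols('x y z')
g1, g2 = x**2 + y, y*z - 1                 # hypothetical generators of the ideal
f = expand((1 + z) * g1 + g2)              # a polynomial built to lie in the ideal

G = groebner([g1, g2], x, y, z, order='lex')
quotients, remainder = reduced(f, G.exprs, x, y, z, order='lex')
print(quotients, remainder)                # remainder == 0 certifies membership
assert expand(sum(q * g for q, g in zip(quotients, G.exprs)) + remainder - f) == 0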