Understanding summation decreasing index
$|4-4| = -4 + 4 = -4 + 4\cdot (0.1)^0$. So $$|4-4|+\sum_{n=1}^{\infty} 4\cdot |0.1|^n = -4 + 4\cdot (0.1)^0 + 4\sum_{n=1}^\infty (0.1)^n = -4+4\sum_{n=0}^\infty (0.1)^n$$
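As a quick numerical sanity check (mine, not part of the original answer), the partial sums of both sides of the re-indexed identity agree:

```python
# partial sums of both sides of the re-indexing identity
lhs = abs(4 - 4) + sum(4 * 0.1**n for n in range(1, 60))
rhs = -4 + 4 * sum(0.1**n for n in range(0, 60))

assert abs(lhs - rhs) < 1e-12
assert abs(rhs - 4/9) < 1e-12   # both sides equal 4/9
```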
Counterexample for non-homotopic vector bundles
It does not make any sense to say $F|_{X\times \{0\}}=E_1$. Two vector bundles (or groups or manifolds or vector spaces) cannot be equal to each other unless they are the same set. These equalities need to be replaced by isomorphisms, and in that case the obvious extension of either bundle to $X\times [0,1]$ gives the desired homotopy. If $E_1$ and $E_2$ are, say, sub-bundles of some fixed bundle (e.g. the tangent bundle), then such an equality could be made meaningful. In fact, the classification of sub-bundles of $TX$ up to homotopy is not the same as their classification up to isomorphism. For instance, consider the following two trivial line bundles in $T(S^2 \times \Bbb R)$. Let $e_0$ be the line bundle corresponding to the $\Bbb R$ direction, and let $e_1$ be the line bundle corresponding to the first coordinate in a trivialization of $T(S^2\times \Bbb R)$ (say by embedding $S^2\times \Bbb R$ into $\Bbb R^3$ smoothly). Fix a Riemannian metric on $S^2 \times \Bbb R$ and let $e_0^\perp$ and $e_1^\perp$ be the corresponding orthogonal complements of $e_0$ and $e_1$. If $e_0$ and $e_1$ were homotopic through line fields $e_t$, then $e_t^\perp$ would give a homotopy through plane fields from $e_0^\perp$ to $e_1^\perp$. Then by the result in the question, $e_0^\perp$ and $e_1^\perp$ would be isomorphic as vector bundles. By the hairy ball theorem, this is not the case.
Prove that an open ball is also closed in an ultrametric space
You don’t need to use the machinery in that PDF to prove it. By definition the ball $B(x,r^-)$ is open. Suppose that $y\in X\setminus B(x,r^-)$. Then $d(x,y)\ge r$, and I claim that $B(y,r^-)$ is an open ball around $y$ disjoint from $B(x,r^-)$. Assuming the claim for a moment, it follows that $X\setminus B(x,r^-)$ is open and hence that $B(x,r^-)$ is closed. To prove the claim, suppose that $z\in B(y,r^-)\cap B(x,r^-)$. Then $$d(x,y)\le\max\{d(x,z),d(z,y)\}<r\;,$$ since $d$ is an ultrametric, contradicting the choice of $y$. This shows that every open ball is also closed. Now we’ll show that the closed ball $B(x,r)$ is also open. Let $y\in B(x,r)$ be arbitrary; I claim that $B(y,r^-)\subseteq B(x,r)$, from which it follows immediately that $B(x,r)$ is open. To prove the claim, let $z\in B(y,r^-)$, so that $d(z,y)<r$. We also have, $d(x,y)\le r$ by the choice of $y$. Thus $$d(z,x)\le\max\{d(z,y),d(y,x)\}\le r\;,$$ so $z\in B(x,r)$. Added: However, it’s important that you learn to work with partitions and equivalence relations: they’re pretty nearly ubiquitous in mathematics, and they’re very handy tools. For the basics see the Wikipedia articles on equivalence relations, partitions, and equivalence classes and/or this page. If $\langle X,d\rangle$ is an ultrametric space, and $r$ is any positive real number, we define a relation $\sim_r$ on $X$ as follows: $$x\sim_r y\quad\text{ iff }\quad d(x,y)\le r\;.$$ Clearly $x\sim_r x$ for every $x\in X$, since $d(x,x)=0\le r$, so $\sim_r$ is reflexive. The ultrametric $d$ is a symmetric function, so $x\sim_r y$ iff $y\sim_r x$, and $\sim_r$ is therefore symmetric. Finally, if $x\sim_r y$ and $y\sim_r z$, then $$d(x,z)\le\max\{d(x,y),d(y,z)\}\le r\;,$$ so $x\sim_r z$, and $\sim_r$ is transitive. By definition, then $\sim_r$ is an equivalence relation. 
Now $B(x,r)=\{y\in X:d(x,y)\le r\}=\{y\in X:x\sim_r y\}$; that last set is by definition the $\sim_r$-equivalence class of $x$, so for each $x\in X$ the $\sim_r$-equivalence class of $x$ is simply the closed ball $B(x,r)$. It is a general fact about equivalence relations that equivalence classes are either disjoint or identical. In this case that means that for any $x,y\in X$, either $B(x,r)\cap B(y,r)=\varnothing$, or $B(x,r)=B(y,r)$. To put it in slightly different language, the equivalence classes of any equivalence relation form a partition of the underlying set: they divide it into parts in such a way that every point is in exactly one part. Similarly, we can define $$x\sim_{r^-} y\quad\text{ iff }\quad d(x,y)<r\;$$ and show that it too is an equivalence relation on $X$, and that its equivalence classes are the open balls $B(x,r^-)$ for $x\in X$. This is what’s being used in that PDF: since the balls $B(x,r^-)$ for $x\in X$ divide up $X$ in such a way that each point is in exactly one of them, the complement of any one of them is the union of all the others and is therefore open, being a union of open sets. This means that each one of them must be closed.
closed form for $\int_{0}^{\infty}\frac{ \beta(a+ix,a-ix)}{\beta(b+ix,b-ix)}\frac{dx}{(b^2+x^2)}$
The integral is evaluated to be \begin{align} \int_{0}^{\infty} \, \left| \frac{\Gamma(a + i x) }{ \Gamma(b + 1 + ix)} \right|^{2} \, dx = \frac{4^{b-a}}{2 \, b} \, B\left( b - a + \frac{1}{2} , \frac{1}{2} \right) \end{align}
Evaluate Contour Integral
Your parameterization is wrong. You need $z(t)=e^{it}$, $0\le t\le\pi$. Under this parameterization, $y=\sin t$ and the integral becomes $$\int_C5y\,\mathrm dz=5\int_0^\pi\sin t\cdot ie^{it}\,\mathrm dt=5i\int_0^\pi\cos t\sin t+i\sin^2t\,\mathrm dt\\ =\tfrac{5}2\int_0^\pi i\sin2t-1+\cos2t\,\mathrm dt=-\frac{5\pi}2.$$
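As an extra check (not in the original answer), the integral can be approximated numerically with a midpoint rule:

```python
import cmath
import math

# midpoint-rule approximation of ∫_C 5y dz along z(t) = e^{it}, 0 ≤ t ≤ π
N = 100_000
h = math.pi / N
total = 0
for i in range(N):
    t = (i + 0.5) * h
    z = cmath.exp(1j * t)
    total += 5 * z.imag * (1j * z) * h   # y = Im z, dz = i e^{it} dt

assert abs(total - (-5 * math.pi / 2)) < 1e-6
```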
If $|f(z)|\le |f(z^2)|$ then prove that $f$ is constant
The proof is correct, except for 2 small points. First, the Maximum Modulus Theorem as stated in Complex Analysis II of Stein and Shakarchi is about a function defined on an open set, such as $\mathbb{D}$, and in $\mathbb{D}$ we still have $z^n \to 0$ as $n \to \infty$ for all $z \in \mathbb{D}$, so there is no need to go to a smaller ball, unless your version is stated for functions on closed balls. Second, the given condition implies $|f(z)| \le |f(z^{(2^n)})|$ instead of $|f(z)| \le |f(z^n)|$ for all $n$.
What does integrating a function $f(x)$ with respect to a function $g(x)$ mean?
Let $u=x^2$; now integrate $(1+u)^{\frac{1}{2}}\,du$, and after integrating substitute back $u=x^2$. That's it! Note: there's no need to find a relation from $u=x^2$ such as $\frac{du}{dx}=2x$; that factor shouldn't be substituted in, because you are integrating with respect to $g(x)=x^2$ itself.
Compute Double Sum $\sum_{n,m=1}^{\infty}\frac{(-1)^{n-1}}{n^2+m^2}=\frac{\pi^2}{24}+\frac{\pi \ln(2)}{8}$
Show that $$\lim_{s\to 1^+} (s-1)\sum_{n,m\ne 0,0} |n+im|^{-2s}=\lim_{s\to 1^+} (s-1)\sum_{n,m\ne 0,0} (n^2+m^2)^{-s}$$ $$= \lim_{s\to 1^+} (s-1) \int_{|x|>1,|y|>1} (x^2+y^2)^{-s}dxdy=\pi$$ From $(1-|1+i|^{-2s}) \sum_{n,m\ne 0,0} |n+im|^{-2s}$ $=\sum_{n,m, 2\ \nmid \ |n+im|^2} |n+im|^{-2s}$ $= 2\sum_{n\ne 0,m} |2n+i(2m+1)|^{-2s}$ you'll get for $s >1$ $$ \begin{eqnarray}F(s)&=&\sum_{n\ge 1,m\ge 1} (-1)^{n-1} (n^2+m^2)^{-s}\\ &=&\frac14\left(\sum_{n,m\ne 0,0} (-1)^{n-1} |n+im|^{-2s} -\sum_{n\ne 0}(-1)^{n-1} |n|^{-2s}+\sum_{m\ne 0}|m|^{-2s}\right) \\&=& \frac14(1- (1-2^{-s}) -2^{1-2s}) \sum_{n,m\ne 0,0} |n+im|^{-2s}+ 2^{-2s} \zeta(2s) \end{eqnarray}$$ $\zeta(2)=\pi^2/6$ will give $$\sum_{n\ge 1}\sum_{m\ge 1} (-1)^{n-1}(n^2+m^2)^{-1} =\lim_{s\to 1^+}F(s)=\frac{\pi}8\log 2 + \frac{\pi^2}{24}$$ where I'm using that $\sum_{n\ge 1,m\ge 1} ( (2n-1)^2+m^2)^{-1}- ((2n)^2+m^2)^{-1})$ is absolutely convergent to write the double series as $\lim_{s\to 1^+}F(s)$.
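Numerically, the final value can be checked by first summing over $n$ in closed form via the standard identity $\sum_{n\ge1}\frac{(-1)^{n-1}}{n^2+m^2}=\frac{1}{2m^2}-\frac{\pi}{2m\sinh(\pi m)}$ (this identity is not used in the argument above; it serves here only as a checking device):

```python
import math

target = math.pi**2 / 24 + math.pi * math.log(2) / 8

# sum over m of the inner closed form 1/(2m^2) - pi/(2m sinh(pi m));
# the sinh term is negligible beyond m ≈ 30
total = sum(1 / (2 * m * m) for m in range(1, 1_000_000))
total -= sum(math.pi / (2 * m * math.sinh(math.pi * m)) for m in range(1, 30))

assert abs(total - target) < 1e-5
```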
Order of $x$ with $x^p=a$
Since $x^p=a$ and $a$ has order $k$, the $k$ elements $e,x^p,x^{2p},\dots,x^{(k-1)p}$ are all distinct and form a subgroup of the subgroup generated by $x$. Hence the order of $x$ must be a multiple of $k$. Moreover $x^{kp}=a^k=e$, so the order of $x$ divides $kp$; since $p$ is prime, the order is therefore either $k$ or $kp$, and to show $x$ has order $kp$ it suffices to show $x^k\neq e$. Since $p$ divides $k$, we have $k=p\ell$ with $\ell$ a positive integer $<k$, and thus $x^k=(x^p)^\ell=a^\ell\neq e$. QED.
How to solve a statement with contradiction evidence?
When does $(P \rightarrow Q) \land (Q \rightarrow R)\rightarrow (P \rightarrow R)$ evaluate to $0$? $$(P \rightarrow Q) \land (Q \rightarrow R)\rightarrow (P \rightarrow R) \equiv 0 \tag{1}$$ if and only if $$(P \rightarrow Q) \land (Q \rightarrow R) \equiv 1 \tag{2} $$ and $$P \rightarrow R \equiv 0 \tag{3}$$ Now $(2)$ implies $$ P \rightarrow Q \equiv 1 \tag{4}$$ and $$Q \rightarrow R \equiv 1 \tag{5}$$ From $(3)$ we can conclude $$P \equiv 1 \tag{6}$$ $$ R \equiv 0 \tag{7}$$ $(5)$ and $(7)$ gives $$Q \equiv 0 \tag{8}$$ $(4)$ and $(8)$ gives $$P \equiv 0 \tag{9}$$ but $(9)$ contradicts $(6)$. So there are no truth values for $P$, $Q$ and $R$ for which $(1)$ holds; in other words, the implication is a tautology.
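The case analysis can be confirmed by brute force over all eight truth assignments:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# (P→Q) ∧ (Q→R) → (P→R) is true under every assignment
assert all(
    implies(implies(P, Q) and implies(Q, R), implies(P, R))
    for P, Q, R in product([False, True], repeat=3)
)
```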
Show $a_n$ is monotone
Prove by induction that $a_n > a_{n-1} > 0$. Hint: $a_{n+1} > a_n \Leftrightarrow 3 - \frac{1}{a_n} > 3 - \frac{1}{a_{n-1} } \Leftrightarrow a_{n} > a_{n-1}$.
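A quick numerical illustration, assuming the recurrence is $a_{n+1}=3-\frac{1}{a_n}$ and taking $a_1=1$ as a sample starting value (the question's actual starting value is not shown here):

```python
# iterate a_{n+1} = 3 - 1/a_n from an assumed a_1 = 1
a = [1.0]
for _ in range(15):
    a.append(3 - 1 / a[-1])

assert all(x < y for x, y in zip(a, a[1:]))   # strictly increasing
assert all(x > 0 for x in a)                  # and positive
```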
A problem in Wedge product of topological spaces
Generally speaking (particularly if you're new to algebraic topology), a good approach is to actually write down the maps which give you a homotopy equivalence of spaces. I will give you a few hints to get you going. HINTS: You start by assuming that $(X, x_0) \sim (Y, y_0)$ and $(Z, z_0) \sim (W , w_0)$. This is extremely important. That tells you that you have some maps back and forth whose compositions are homotopic to the identity. Keep these maps, and the homotopies which give you your homotopy equivalences, in mind. You can use these maps back and forth to build yourself some maps back and forth on the wedges of spaces $X\vee Z \leftrightarrow Y\vee W$. You then want to show that the composition of these maps is homotopic to the identity on both spaces - the definition of the spaces being homotopy equivalent. To do that, you're going to want to use the homotopies from earlier, which give you that your spaces are homotopy equivalent. The above is the bare bones of what I imagine is the argument you'll want to run. Give it a shot, and then if you want more details please comment and I can try to say a little more.
Relationship between a "standard fact" about limits, Cauchy and d'Alambert
The statement (1) is not correct. For instance, consider $$ a_n=\frac{1}{2^{n+(-1)^n}}. $$ The ratios are $$ \frac{a_{n+1}}{a_n}=\frac{2^{n+(-1)^n}}{2^{(n+1)+(-1)^{n+1}}}=\begin{cases}2 & \text{$n$ even}\\\frac{1}{8} & \text{$n$ odd}\end{cases} $$ while the roots satisfy $\sqrt[n]{a_n}\to\frac{1}{2}$ as $n\to\infty$. In fact, the correct implication runs the other way: if $\lvert\frac{a_{n+1}}{a_n}\rvert\to q$ as $n\to\infty$, then $\sqrt[n]{\lvert a_n\rvert}\to q$ as $n\to\infty$ as well. As for your claims about convergence: remember that convergence of the ratios/roots is not sufficient to show convergence of the series in either case; what's required is that the limit be less than $1$.
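The counterexample is easy to confirm numerically (an illustration of mine, not from the answer):

```python
# ratios a_{n+1}/a_n oscillate between 2 and 1/8, while a_n^{1/n} -> 1/2
def a(n):
    return 1 / 2 ** (n + (-1) ** n)

ratios = {a(n + 1) / a(n) for n in range(1, 50)}
assert ratios == {2.0, 0.125}

assert all(abs(a(n) ** (1 / n) - 0.5) < 0.01 for n in (100, 1000))
```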
Extend triple of coprime numbers to summation of coprime numbers
Yes. In fact, we can always do this with $x = 1$. We start by just finding an arbitrary solution to the identity. Choose $y_0$ such that $by_0 \equiv -a \pmod c$; this is possible because $b$ has an inverse modulo $c$. Then we have $a + by_0 = cz_0$, and all is right with the world, except that these might not be relatively prime. We can generate an infinite family of these solutions of the form $$a + b(y_0 + ck) = c(z_0 + bk)$$ where $k$ can be any natural number. It suffices to choose $k$ to avoid any common factors between the three terms. (Note that if two of $ax, by, cz$ have a common divisor, the third is divisible by it as well. So it suffices to only check that $\gcd(ax, by) = 1$, which is what we do.) We know that $\gcd(a, y) = 1$ if $y \equiv 1 \pmod a$. So choose $k$ to solve the equation $ck \equiv 1-y_0 \pmod a$; this is possible because $c$ has an inverse modulo $a$. Then $\gcd(a, y_0 + ck) = 1$, so $\gcd(a, b(y_0+ck)) = 1$: we already knew that $\gcd(a,b)=1$. So we've found a solution where the first two terms are relatively prime, and therefore all three terms are pairwise relatively prime.
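The construction above can be run directly; the helper name `extend` is mine:

```python
from math import gcd

def extend(a, b, c):
    # given pairwise coprime a, b, c: find y, z with a + b*y = c*z
    # and a, b*y, c*z pairwise coprime, following the answer's steps
    y0 = (-a * pow(b, -1, c)) % c          # b*y0 ≡ -a (mod c)
    k = ((1 - y0) * pow(c, -1, a)) % a     # c*k ≡ 1 - y0 (mod a), so y ≡ 1 (mod a)
    y = y0 + c * k
    z = (a + b * y) // c
    return y, z

a, b, c = 3, 5, 7
y, z = extend(a, b, c)
assert a + b * y == c * z
assert gcd(a, b * y) == gcd(a, c * z) == gcd(b * y, c * z) == 1
```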
When is a quotient by closed equivalence relation Hausdorff
$\def\RR{\mathbb{R}}$There is no separation condition which will do the job. That's a vague statement, so here is a precise one: There is a subset of $\mathbb{R}^2$ which (equipped with the subspace topology) does not have condition $\dagger$. Proof: Let $A$ and $B$ be two disjoint dense subsets of $\mathbb{R}$, neither of which contains $0$. (For example, $\mathbb{Q} +\sqrt{2}$ and $\mathbb{Q}+\sqrt{3}$.) Let $$X = (A \times \RR_{\geq 0}) \cup (B \times \RR_{\leq 0}) \cup (\{0\} \times \RR_{\neq 0}) \subset \RR^2.$$ Define $(x_1,y_1)$ and $(x_2, y_2)$ to be equivalent if $x_1=x_2$ and, in the case that $x_1=x_2=0$, that $y_1$ and $y_2$ have the same sign. Verification that this is a closed equivalence relation: $X^2$ is a metric space, so we can check closure on sequences. Suppose we have a sequence $(x_n, y_n) \sim (x'_n, y'_n)$ with $\lim_{n \to \infty} x_n=x$, $\lim_{n \to \infty} y_n=y$, $\lim_{n \to \infty} x'_n=x'$ and $\lim_{n \to \infty} y'_n=y'$. We must verify that $(x,y) \sim (x',y')$. First of all, we have $x_n = x'_n$, so $x=x'$ and, if $x=x' \neq 0$, we are done. If $x=x'=0$, we must verify that $y$ and $y'$ have the same sign. But $y_n$ and $y'_n$ weakly have the same sign for all $n$, so they can't approach limits with strictly different signs. Verification that $X/{\sim}$ is not Hausdorff: We claim that no pair of open sets in $X/{\sim}$ separates the images of $(0,1)$ and $(0,-1)$. Suppose such open sets exist, and let $U$ and $V$ be their preimages in $X$. Then there is some $\delta$ such that $(A \cap (-\delta, \delta) )\times \RR_{\geq 0} \subset U$ and $(B \cap (-\delta, \delta) )\times \RR_{\leq 0} \subset V$. Pick any $a \in A \cap (-\delta,\delta)$; then $(a,0) \in U$, and since $U$ is open and $B$ is dense, $U$ also contains a point $(b,0)$ with $b \in B \cap (-\delta,\delta)$. But $(b,0)\in V$, so $U$ and $V$ intersect, a contradiction.
Manifold which is union of two balls is topologically a sphere
As stated, your question has a negative answer for all $n\ge 2$. For instance, you can represent the surface known as the "open pair of pants" or "triply punctured sphere" as the union of two open disks whose intersection is the disjoint union of three disks. The assumption that you are missing is that $M$ is a closed manifold (compact and without boundary). Of course, Petersen has this assumption, as he is proving the sphere theorem, which is about closed manifolds. In fact, Petersen gives a pretty good argument for why the manifold is a sphere, as he observes that: $\pi_1(M)=1$, $\pi_k(M)=0$ for all $k=2,...,n-1$. From this, one concludes that $M$ is homeomorphic to $S^n$ using the topological h-cobordism theorem; Remove from $M$ two small disjoint balls. The complement $W$ is an h-cobordism between two $(n-1)$-spheres. Therefore, $W$ is homeomorphic to $S^{n-1}\times [0,1]$. Then, follow the proof that Lee Mosher wrote in his (now deleted) answer. (This is the same proof that Smale has in his solution of the Poincaré conjecture.)
Metric spaces, Heine-Cantor and boundness
It is easiest with an open cover and a finite subcover. Given $\varepsilon > 0$, continuity gives you a ball of size $\delta(x,\varepsilon)$ around each $x \in D$. This is an open cover of $D$. Then compactness lets you extract a finite subcover. What does the subcover do for you?
Derivative definition with double limit
$$ \begin{split} \frac{f(x+h)-f(x-k)}{h+k} &= \frac{f(x+h)-f(x)+f(x)-f(x-k)}{h+k} \\ &= \frac{f'(x)h+f'(x)k+o(h)+o(k)}{h+k} \xrightarrow[\substack{h\to0^+\\k\to0^+}]{} f'(x) . \end{split} $$
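A numerical illustration (with $f=\sin$ as a sample function of my choosing):

```python
import math

f, fp = math.sin, math.cos
x = 0.7

# the two-sided quotient approaches f'(x) even when h ≠ k
for h, k in [(1e-5, 3e-6), (2e-6, 7e-6)]:
    sym = (f(x + h) - f(x - k)) / (h + k)
    assert abs(sym - fp(x)) < 1e-5
```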
Matrices, bases and subspaces
Part (a) is not hard. Clearly $0_{3\times 3} \in W$ and: if $B,C \in W$, then $B+C \in W$ since $A(B+C) = AB+AC = 0_{3\times 3}+0_{3\times 3}=0_{3\times 3}$; if $B \in W$ and $k \in \mathbb{R}$, then $kB \in W$ since $A(kB) = kAB = k0_{3\times 3}=0_{3\times 3}$. $W$ is thus a linear subspace of the $3 \times 3$-matrices. For (b), following the hint of Omnomnomnom in his comment, note that if $B = \left( B_1 \vert B_2 \vert B_3 \right)$: $$AB = 0_{3\times 3} \iff A\left( B_1 \vert B_2 \vert B_3 \right) = 0_{3\times 3} \iff \left( AB_1 \vert AB_2 \vert AB_3 \right) = 0_{3\times 3}$$ Each of the columns of $B$ is an element of the null space (or kernel) of $A$. Find this null space and use it to fill the columns of $B$. Addendum after comments. You found the null space of $A$: $$\mbox{Null}\,A = \left\{ \begin{pmatrix} 2r \\ -6r \\ r \end{pmatrix} : r \in \mathbb{R} \right\}$$ Now construct $B$ by filling its columns with (in general, different) vectors from $\mbox{Null}\,A$: $$B = \begin{pmatrix} 2r_1 & 2r_2 & 2r_3 \\ -6r_1 & -6r_2 & -6r_3 \\ r_1 & r_2 & r_3 \end{pmatrix}$$ This means $B$, and thus any element of $W$, can be written as: $$r_1\begin{pmatrix} 2 & 0 & 0 \\ -6 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}+r_2\begin{pmatrix} 0 & 2 & 0 \\ 0& -6 & 0 \\ 0 & 1 & 0 \end{pmatrix}+r_3\begin{pmatrix} 0& 0 & 2 \\ 0 & 0 & -6 \\ 0 & 0&1\end{pmatrix}$$
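Since the question's matrix $A$ is not reproduced here, the following check uses a hypothetical $A$ whose null space is spanned by $(2,-6,1)$; it confirms that filling the columns of $B$ from $\mbox{Null}\,A$ gives $AB=0$:

```python
# hypothetical A (not the one from the question) whose null space
# is spanned by (2, -6, 1), matching the null space found above
A = [[3, 1, 0],
     [0, 1, 6],
     [3, 2, 6]]

def matmul(P, Q):
    n, m, p = len(P), len(Q), len(Q[0])
    return [[sum(P[i][k] * Q[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

r1, r2, r3 = 1, -2, 5                      # arbitrary coefficients
B = [[2 * r1, 2 * r2, 2 * r3],
     [-6 * r1, -6 * r2, -6 * r3],
     [r1, r2, r3]]

assert matmul(A, B) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```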
Show that if $M$ is a compact submanifold of $\Bbb R^n$ then it doesn't have an atlas with only one chart
If the atlas of $M$ has only one chart, then $M$ is homeomorphic to an open subset $U$ of some Euclidean space $\Bbb R^n$. But $M$ is compact, so $U$ would be a non-empty compact open subset of $\Bbb R^n$, and no such subset exists: a compact set is closed and bounded, and since $\Bbb R^n$ is connected, its only subsets that are both open and closed are $\varnothing$ and $\Bbb R^n$ itself.
How to do continuous-time Bayesian updating?
Define $G$ and $B$ to be the events of being in the good and bad state, respectively. Let $I$ be the event that no news arrives in the interval $[t, t+dt)$. As you state, we are interested in $\Pr(G|I)$, because if news arrives, then we are sure to be in the good state. So we are interested in the evolution of our belief of being in the good state as time passes without the arrival of news. Using Bayes' theorem, $$ \begin{split} \Pr(G|I) &= \frac{\Pr(I|G)\Pr(G)}{\Pr(I|G)\Pr(G)+\Pr(I|B)\Pr(B)} = \frac{(1-\lambda K_tdt)p_t}{(1-\lambda K_tdt)p_t + 1(1-p_t)} \\ &= \frac{(1-\lambda K_tdt)p_t}{1-\lambda p_tK_t dt}. \end{split} $$ Let the posterior probability after observing the information over $[t, t+dt)$ be denoted by $p_t + dp_t$. Therefore, $$ p_t + dp_t = \frac{(1-\lambda K_tdt)p_t}{1-\lambda p_tK_t dt} $$ Simplifying after ignoring the $dp_t\,dt$ term, we get $$ \frac{dp_t}{dt} = -\lambda p_t(1-p_t)K_t. $$
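As a sanity check with example values of my own, take a constant hazard $K_t\equiv 1$; then the ODE has the closed form $p_t=\frac{p_0e^{-\lambda t}}{p_0e^{-\lambda t}+1-p_0}$, and a forward Euler integration reproduces it:

```python
import math

# example values (my choice): constant hazard K_t = 1
lam, p0, T, steps = 2.0, 0.5, 3.0, 200_000
dt = T / steps

# forward-Euler integration of dp/dt = -lam * p * (1 - p)
p = p0
for _ in range(steps):
    p += -lam * p * (1 - p) * dt

# closed form for constant K: p_t = p0 e^{-lam t} / (p0 e^{-lam t} + 1 - p0)
closed = p0 * math.exp(-lam * T) / (p0 * math.exp(-lam * T) + 1 - p0)
assert abs(p - closed) < 1e-4
```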
Example 3, Sec. 28 in Munkres' TOPOLOGY, 2nd ed: How does $S_\Omega$ satisfy the sequence lemma?
Let $a\in S_\Omega$. Then $S_a$ is countable, so $\{a\}\cup S_a$ is countable, and $S_\Omega\setminus(\{a\}\cup S_a)$ is uncountable and therefore non-empty. $S_\Omega$ is well-ordered, so $S_\Omega\setminus(\{a\}\cup S_a)$ has a least element; call it $a^+$. Suppose that $U$ is an open nbhd of $a$ in $S_\Omega$. Case 1. Suppose first that this $a$ is NOT the smallest element of $S_\Omega$. Then by definition there are $b,c\in S_\Omega$ such that $a\in(b,c)\subseteq U$, where $(b,c)=\{x\in S_\Omega:b<x<c\}$. Clearly $a<c$, so $a^+\le c$, and $$a\in(b,a^+)\subseteq(b,c)\subseteq U\;.$$ Thus, $\mathscr{B}_a=\{(b,a^+):b\in S_a\}$ is a local base at $a$, and since $S_a$ is countable, so is $\mathscr{B}_a$. Case 2. Next, suppose that $a$ is the smallest element of $S_\Omega$. Then there exists an element $c \in S_\Omega$ such that $a \in [a, c) \subset U$. Thus we again have $a^+ \leq c$, and so $a \in \left[a, a^+ \right) \subset [a, c) \subset U$. Therefore we have the following as a local base at $a$: $$ \mathscr{B}_a = \left\{ \ \left[ a, a^+ \right) \ \right\}. $$ This shows that $S_\Omega$ is first countable. The desired result now follows from the following lemma. Lemma. If $X$ is a first countable space, $x\in X$, $A\subseteq X$, and $x\in\operatorname{cl}A$, then there is a sequence in $A$ converging to $x$. Sketch of Proof. Let $\mathscr{B}=\{B_n:n\in\Bbb N\}$ be a countable local base at $x$. For each $n\in\Bbb N$ let $U_n=\bigcap_{k\le n}B_k$; $U_n$ is an open nbhd of $x$, so there is a point $x_n\in U_n\cap A$. Now check that $\langle x_n:n\in\Bbb N\rangle$ converges to $x$. $\dashv$
Number of valuation ring of a given field
For a non-negative integer $a$ let $$K=\Bbb{Q}(B), \qquad B=\{ (1+ma)^{1/n}: n\ge 1,\gcd(n,a)=1,m\ge 0\}$$ If $K=\operatorname{Frac}(R)$ with $R$ a DVR, then the valuation $v$ on $R$ extends the $p$-adic valuation for some $p$. If $p\nmid a$ then $p$ divides some $1+ma$ and $v((1+ma)^{1/n})=v(1+ma)/n$ gives that $v(K)=\Bbb{Q}$, i.e. $v$ is not discrete. Whence $p\mid a$. With $O_K$ the ring of algebraic integers $\subset K$, $O_K$ has a lot of prime ideals above $p$, and since $K$ is unramified at $p$ we get that any valuation above $p$ is discrete, i.e. infinitely many choices for $v$. To get only one choice for $v$ we need to add more elements to $B$: For each $b\in B$, $f\in \Bbb{Z}[x]$, $n\ge 1$ with $\gcd(n,a)=1$: add $(1+(b-1)f(b))^{1/n}$ to $B$, and then repeat iteratively, choosing $b$ also among the newly added elements. This time only one valuation above $p$ is discrete: the one such that $v(b-1)>0$ for all $b\in B$, which gives a natural embedding $K\to \Bbb{Q}_p$. The other valuations are not discrete, because if $v(b-1)=0$ then for some $f\in \Bbb{Z}[x]$ we'll have $v(1+(b-1)f(b))>0$, and when adding the $n$-th roots of $1+(b-1)f(b)$ we'll get a non-discrete valuation.
Distributing candies
With the function $c(m,x,t)$ counting the number of ways of distributing $m$ candies among $x$ people so nobody received more than $t$ (but they could receive zero), the answer would be $$\sum_{m=\max(B,G)}^{N\min(B,G)}\; c(m-B,B,N-1)c(m-G,G,N-1)$$ Added: To calculate $c(m,x,t)$, note that for all $t$ you start with $c(0,0,t)=1$ and $c(m,0,t)=0$ for $m \gt 0$, and then for $x\gt 0$ you can use: $$c(m,x,t)=\sum_{j=0}^{\min(m,t)} c(m-j,x-1,t).$$ E.g. if $t=1$ then you get a table for $c(m,x,t)$ starting

         m: 0  1  2  3
    x=0:    1  0  0  0
    x=1:    1  1  0  0
    x=2:    1  2  1  0
    x=3:    1  3  3  1

while if $t=2$ then you get a table for $c(m,x,t)$ starting

         m: 0  1  2  3
    x=0:    1  0  0  0
    x=1:    1  1  1  0
    x=2:    1  2  3  2
    x=3:    1  3  6  7

Considering your example of $B=1$, $G=2$, $N=3$ we get $$c(2-1,1,3-1)c(2-2,2,3-1)+c(3-1,1,3-1)c(3-2,2,3-1)$$ $$=c(1,1,2)c(0,2,2)+c(2,1,2)c(1,2,2)$$ $$=1\times 1 + 1\times 2 = 3$$ while considering your example of $B=2$, $G=2$, $N=2$ we get $$c(2-2,2,2-1)c(2-2,2,2-1)+c(3-2,2,2-1)c(3-2,2,2-1)+c(4-2,2,2-1)c(4-2,2,2-1)$$ $$=c(0,2,1)c(0,2,1)+c(1,2,1)c(1,2,1)+c(2,2,1)c(2,2,1)$$ $$=1\times 1 + 2\times 2 + 1\times 1 = 6$$ so this approach reproduces your expected results.
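The recursion and the final sum are easy to code up; this sketch reproduces both of the expected answers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def c(m, x, t):
    # ways to give m candies to x people, each receiving 0..t
    if x == 0:
        return 1 if m == 0 else 0
    return sum(c(m - j, x - 1, t) for j in range(min(m, t) + 1))

def distributions(B, G, N):
    lo, hi = max(B, G), N * min(B, G)
    return sum(c(m - B, B, N - 1) * c(m - G, G, N - 1)
               for m in range(lo, hi + 1))

assert distributions(1, 2, 3) == 3
assert distributions(2, 2, 2) == 6
```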
Solution of equations of the form: $a^x+b^x+c=0$
If $a$ is a power of $b$, or vice versa, then the transcendental equation can be reduced to a polynomial one. Otherwise its solutions cannot, in general, even be expressed in terms of the Lambert W function, so your only hope is to solve it by means of numerical algorithms.
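In the numerical case, bisection is enough whenever the left-hand side is monotone, e.g. for $a,b>1$ and $c<0$ (the sample values below are mine):

```python
def solve(a, b, c, lo=-50.0, hi=50.0, tol=1e-12):
    # bisection for a^x + b^x + c = 0, assuming a, b > 1 and c < 0,
    # so f is increasing with a sign change on [lo, hi]
    f = lambda x: a**x + b**x + c
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = solve(2, 3, -10)
assert abs(2**x + 3**x - 10) < 1e-9
```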
Let $A$ be an integer. Consider two integers $A$ and $A+1.$ Print the sum of the numbers that cannot be formed using any combination of $A$ and $A+1.$
Here is a pattern that I was able to derive. This is not a complete answer and you will have to refine it further. This is for any given positive integer $A$, where all possible numbers $n$ need to be built using $n = iA + j(A+1)$, where $i, j \in \Bbb Z$ and $i, j \ge 0$. Say $k = A - 1$. The numbers that you cannot make using $A, A + 1$ are:

(1) $ \, 1$ to $A - 1$
(2) $ \,A+2$ to $2A-1$
(3) $ \,2A+3$ to $3A-1$
...
(A-2) $ \,(A-3)A + (A-2)$ to $A(A-2) -1$ ($2$ numbers)
(A-1) $ \,(A-2)A + (A-1)$ to $A(A-1) -1$ (just $1$ number, which is also the largest).

In other words, we can write it as $S = \sum \limits_{i=1}^k \sum \limits_{j = 1}^{A - i} \big((i-1)A + (i -1) + j\big)$

EDIT: I tried to simplify and see if it came to the same answer as in Arthur's link:
$S = \sum \limits_{i=1}^k (i-1) (A + 1) (A - i) + \frac {(A - i) (A + 1 - i)} {2}$
$S = \frac {1}{2} \sum \limits_{i=1}^k (2A^2 + 2A + 1) i - (2A + 1) i^2 - A (A + 1)$
$S = \frac {1}{2} \left[\frac {A(A-1)(2A^2 + 2A + 1)}{2} - \frac {A(A-1)(2A-1)(2A+1)}{6} - A (A-1) (A + 1)\right]$
Simplifying, I get $ \, S = \frac {(A-1)(A^3-A)}{6}$, which is the same as in the link, since there the numbers are $n + 1, n + 2$ instead of $A, A + 1$.
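The closed form can be checked against brute force for small $A$ (every non-representable number lies below $A(A+1)$):

```python
def sum_unreachable(A):
    # brute force: every non-representable number is below A*(A+1)
    limit = A * (A + 1)
    reachable = {i * A + j * (A + 1)
                 for i in range(limit // A + 1)
                 for j in range(limit // (A + 1) + 1)}
    return sum(n for n in range(1, limit) if n not in reachable)

# matches S = (A-1)(A^3 - A)/6
assert all(sum_unreachable(A) == (A - 1) * (A**3 - A) // 6
           for A in range(2, 12))
```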
Question on sum of normal variable
If $X$ and $Y$ are independent standard normal, then your calculation is correct: the random variable $\frac{X+Y}{\sqrt{2}}$ has variance $1$, and is normal. Under the assumption of independence, $X+Y$ is indeed normal with variance $2$. Dividing $X+Y$ by $\sqrt{2}$ divides the variance by $(\sqrt{2})^2$, that is, by $2$, giving variance $1$.
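A quick simulation (mine, not part of the answer) confirming that $\frac{X+Y}{\sqrt{2}}$ has mean $0$ and variance $1$:

```python
import math
import random
import statistics

random.seed(1)
zs = [(random.gauss(0, 1) + random.gauss(0, 1)) / math.sqrt(2)
      for _ in range(200_000)]

assert abs(statistics.fmean(zs)) < 0.02
assert abs(statistics.variance(zs) - 1) < 0.02
```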
Combining an arbitrary number of integers into one s.th. each can be reconstructed
Continue your solution for $k=2$ in this manner: $$n_1+(m+1)n_2 + (m+1)^2n_3+\cdots+(m+1)^{k-1}n_k$$
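In code, this is just writing the $n_i$ as the digits of one number in base $m+1$ (function names are mine):

```python
def combine(nums, m):
    # pack integers 0 <= n_i <= m into one number, base (m + 1)
    total = 0
    for n in reversed(nums):
        total = total * (m + 1) + n
    return total

def extract(code, m, k):
    # recover the k packed integers
    out = []
    for _ in range(k):
        code, r = divmod(code, m + 1)
        out.append(r)
    return out

nums = [3, 0, 7, 2]
packed = combine(nums, 9)
assert extract(packed, 9, len(nums)) == nums
```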
Initial values and step input
$y(t)$ is defined as $c\,x(t)$. So by just multiplying both terms of $x(t)$ by $c$ you get $y(t)$. Remember that $y_0(t)=c\,x_0(t)$. After this you also need to use the substitution for $u(t)$ in the integral, which is two times the step function. The two is a constant, so it can be taken outside of the integral. The step function is always one on the interval of the integral (assuming $t\geq0$), so that factor also drops out, since multiplying by one does not change anything. Your last question is related to the previous question you asked on this site (however, I never use those intermediate variables myself, so I am not familiar with the method you are following).
Distance between two symmetric equations
Yes this is correct, although you could equally well have used the point $(2,5,1)$ as the chosen point on line $L_1$ and obtained the same result. The general result is that the distance $d$ between the skew lines $\underline{r}=\underline{a_1}+\lambda\underline{b_1}$ and $\underline{r}=\underline{a_2}+\mu\underline{b_2}$ is given by $$d=\left|(\underline{a_2}-\underline{a_1})\cdot\underline{\hat{n}}\right|, $$ where $\underline{\hat{n}}$ is the unit vector parallel to $$\underline{b_1}\times\underline{b_2}$$
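The general formula is easy to implement (a sketch of mine, with a simple sample pair of skew lines):

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def skew_distance(a1, b1, a2, b2):
    # d = |(a2 - a1) · n̂| with n̂ the unit vector parallel to b1 × b2
    n = cross(b1, b2)
    diff = tuple(p - q for p, q in zip(a2, a1))
    return abs(dot(diff, n)) / math.sqrt(dot(n, n))

# example: the z-axis and the line {(1, t, 0)} are at distance 1
assert abs(skew_distance((0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0)) - 1) < 1e-12
```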
Perturbative solution to an initial-value problem
Using the expansion in (7.1.9) and plugging into (7.1.8) we have $$ \sum_{n=0}^{\infty} \epsilon^n y_n'' = f(x) \sum_{n=0}^{\infty} \epsilon^{n+1} y_n \\ \sum_{n=0}^{\infty} \epsilon^n y_n'' = f(x) \sum_{n=1}^{\infty} \epsilon^n y_{n-1} $$ We assume $\epsilon \ll 1$, so at leading order $y_0'' = 0$. This leaves the higher-order terms $$ \underbrace{y_0''}_{\text{0th order}} + \sum_{n=1}^{\infty} \epsilon^n y_n'' = f(x) \sum_{n=1}^{\infty} \epsilon^n y_{n-1} $$ and matching powers of $\epsilon$ gives (7.1.10): $$ y_n'' = y_{n-1} f(x) $$
Direct sum of subspaces of a vector space.
As a wild guess, I'm guessing that you're asking, "If the $w_i$ have intersection 0, is $V$ the direct sum of them? Or do they need to pairwise intersect in 0?" Trivial common intersection is certainly not enough. Think of the $xy$, $yz$, and $xz$ subspaces of $\mathbb R^3$: their common intersection is trivial, yet the vector $(0, 1, 0)$ can be written as a sum of elements of the first and third, or as an element of the second, so $\mathbb R^3$ is not a direct sum of these subspaces. In fact, even pairwise trivial intersections are not enough once there are three or more subspaces (three distinct lines through the origin in $\mathbb R^2$ intersect pairwise in 0 but do not give a direct sum); the right condition is that each $w_i$ intersects the sum of the others trivially.
Parallelepiped formula induction
If a parallelogram is defined by $\vec{u}$ and $\vec{v}$, then $\vec{u} + \vec{v}$ and $\vec{u} - \vec{v}$ are the two diagonals of the parallelogram. Therefore, the second one shows that the sum of the squares of all four sides of the parallelogram (because a parallelogram has two pairs of equal sides, we have a factor of $2$) is equal to the sum of the squares of its diagonals - a well known theorem. With this in mind, try part (a)!
From brownian bridge to brownian motion proof
Easy way: we know that $B_t$ is a Gaussian process, i.e. for any $t_1, \dots, t_n$ the random vector $(B_{t_1}, \dots, B_{t_n})$ has a jointly Gaussian distribution. Since any linear transformation of a jointly Gaussian vector is again jointly Gaussian, it follows that $W_t$ is also a Gaussian process, and likewise so is $Y_t$. Then it's easy to compute that $Y_t$ has mean 0, and that the covariance of $Y_s$ and $Y_t$ is $s \wedge t$, so $Y_t$ is Brownian motion.
Why is every open in $\mathbb{A}^1$ necessarily principal?
Since $F[X]$ is a PID, the ideal generated by $S$ is generated by a single polynomial $f$. Hence $V(S)=V(f)$, and so $U$ is a principal open set.
Can a Gaussian integer matrix have an inverse with Gaussian integer entries?
If $A$ is a matrix over any commutative ring such that $A^{-1}$ also has entries in that ring, then from the equation $AA^{-1} = I$ we see that the determinant of $A$ must be a unit. And conversely, if $\det(A)$ is a unit, then $A^{-1}$ will have entries in the given ring (using the formula for adjugate matrix). So in particular, since the units in the ring of Gaussian integers are $\pm 1, \pm i$, it follows that a square matrix has inverse whose entries are also Gaussian integers if and only if its determinant is equal to $\pm 1, \pm i$.
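For the $2\times2$ case this is easy to check concretely, modelling Gaussian integers with Python complex numbers (a small sketch of mine, not a general-purpose implementation):

```python
# 2x2 case over Z[i], using Python complex numbers with integer parts
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def inverse2(M):
    (a, b), (c, d) = M
    D = det2(M)
    assert D in (1, -1, 1j, -1j), "determinant must be a unit in Z[i]"
    inv_D = {1: 1, -1: -1, 1j: -1j, -1j: 1j}[D]   # 1/D, still in Z[i]
    return [[d * inv_D, -b * inv_D],
            [-c * inv_D, a * inv_D]]               # adjugate divided by det

M = [[2 + 1j, 1], [1 + 1j, 1]]                     # det = 1, a unit
Minv = inverse2(M)
prod = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]                    # M @ Minv == I
```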
What is the probability that precisely $z$ cars are yellow?
What you want is the hypergeometric distribution. The answer will be $$\frac{\binom{x}{z}\binom{100-x}{y-z}}{\binom{100}{y}}.$$
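A small sketch (sample values of $x$, $y$ are mine); one sanity check is that the probabilities sum to $1$ over all $z$:

```python
from math import comb

def prob_yellow(x, y, z):
    # P(exactly z yellow among y cars sampled from 100, x of them yellow)
    return comb(x, z) * comb(100 - x, y - z) / comb(100, y)

# probabilities over all z sum to 1
x, y = 30, 10
assert abs(sum(prob_yellow(x, y, z) for z in range(y + 1)) - 1) < 1e-12
```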
Formula for sum of divisors
The equation $\, d \!=\! \prod_{i=1}^k p_i^{\mu_i}, \,$ where $\, 0 \!\leq\! \mu_i \!\leq\! m_i \,$ gives us a bijection between divisors $\,d\,$ of $\,n\,$ and tuples $\, \mu_i \,$ that satisfy $\, 0 \!\leq\! \mu_i \!\leq\! m_i. \,$ Thus $\, \sum_{d|n} f(d) \,$ uniquely corresponds to $\, \sum_{0 \!\leq\! \mu_i \!\leq\! m_i} f(\prod p_i^{\mu_i}). \,$
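For the multiplicative special case $f(d)=d$, this bijection turns the divisor sum into a product over prime powers, $\sigma(n)=\prod_i\frac{p_i^{m_i+1}-1}{p_i-1}$; a quick sketch:

```python
def factorize(n):
    # prime factorization n = prod p_i^{m_i} by trial division
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def sigma(n):
    # sum of divisors: one geometric-series factor per prime
    total = 1
    for p, m in factorize(n).items():
        total *= (p**(m + 1) - 1) // (p - 1)
    return total

assert sigma(12) == 1 + 2 + 3 + 4 + 6 + 12
assert sigma(100) == sum(d for d in range(1, 101) if 100 % d == 0)
```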
Prove that $\lim_{x\to 1}\left(4+x-3x^{3}\right)=2$.
There is a flaw in your argument that is worth noting. You write "if $|x-1|<\delta$ then we have two inequalities $$3(-\delta-1)<-3x<3(\delta-1)$$ and $$-3(\delta+1)^2<-3x^2<-3(-\delta+1)^2."$$ The first of these is correct, but the second is only correct if you've already assumed $\delta\le1$. For example, one way to satisfy $|x-1|\lt\delta$ is to let $x=0$ and $\delta=2$. This turns the second line of inequalities into $$-27\lt0\lt-3\;,$$ which is clearly false.
CDF and PDF of $Z = aX +bY$ ; $X \sim \exp({\lambda_1})$ and $Y \sim \exp({\lambda_2})$
Specifying some details would be appreciated: are $X$ and $Y$ independent? Are $a,b$ real parameters, or non-negative? (This changes the calculation a lot.) For the rest you can use the standard CDF method $$F_Z(z)=\int_{ax+by \leq z}f_{XY}(x,y)\,dxdy$$ and $$f_Z(z)=\frac{d}{dz}F_Z(z)$$ Hint for the calculations, CDF method: $$F_Z(z)=\mathbb{P}[Z \leq z]=\mathbb{P}[aX+bY \leq z]=\mathbb{P}\left[Y \leq \frac{z}{b}-\frac{a}{b}X\right]$$ The region of integration is the triangle in the first quadrant below the line $y=\frac{z}{b}-\frac{a}{b}x$. Then this is the integral to solve: $$F_Z(z)=\int_{0}^{\frac{z}{a}}\lambda e^{-\lambda x}\left(\int_{0}^{\frac{z}{b}-\frac{a}{b}x}\theta e^{-\theta y}\,dy\right)dx$$ [I changed the exp parameters to $\lambda$ and $\theta$ to simplify the notation] Other methods: use the Fundamental Transformation Theorem (Jacobian method); or, considering that if $X\sim \exp(\theta)$ then $aX\sim \exp(\frac{\theta}{a})$, you can modify the parameters of your marginal distributions and calculate the density of the sum immediately by convolution. The first method is useful to improve your brainstorming.
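For the independent, non-negative-coefficient case, the double integral can be checked against simulation (the parameter values below are mine, chosen only for illustration):

```python
import math
import random

# example values: X ~ exp(lam), Y ~ exp(theta), Z = a*X + b*Y
lam, theta, a, b, z = 1.0, 2.0, 1.0, 3.0, 4.0

def F_Z(z, N=4000):
    # outer midpoint rule over x; inner y-integral in closed form
    h = (z / a) / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * h
        u = z / b - (a / b) * x
        total += lam * math.exp(-lam * x) * (1 - math.exp(-theta * u)) * h
    return total

random.seed(0)
n = 200_000
hits = sum(a * random.expovariate(lam) + b * random.expovariate(theta) <= z
           for _ in range(n))
assert abs(F_Z(z) - hits / n) < 0.01
```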
Is Weyl's second form of the algebraic definition of the derivative of a polynomial complete?
No. The point is that the error terms are present in $g(x,y)$ but disappear when you compute $g(x,x)$. Writing $y=x+t$, the second equation becomes $$f(x+t)-f(x)=tg(x,x+t).$$ In other words, $tg(x,x+t)$ is what you call $tf'(x)+t^2\Delta$, so $$g(x,x+t)=f'(x)+t\Delta.$$ To evaluate $g(x,x)$, you plug in $t=0$ and get $$g(x,x)=f'(x).$$
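A concrete illustration (mine): for $f(x)=x^3$ one finds $g(x,y)=x^2+xy+y^2$, and indeed $g(x,x)=3x^2=f'(x)$:

```python
# for f(x) = x^3:  f(y) - f(x) = (y - x)(x^2 + xy + y^2), so
# g(x, y) = x^2 + xy + y^2 and g(x, x) = 3x^2 = f'(x)
def g(x, y):
    return x * x + x * y + y * y

for x in (-2.0, 0.5, 3.0):
    y = x + 1e-6
    assert abs(g(x, y) - (y**3 - x**3) / (y - x)) < 1e-6
    assert g(x, x) == 3 * x * x
```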
Understanding the second condition of the pumping lemma
You are absolutely correct. The proof is badly written. The two thirds of the first case that you have marked and the second case are indeed impossible and therefore do not need to be treated, but just confuse the reader. Consequently, we see that already the language $\{0^n1^n: n\geq 0\}$ is non-regular, because exactly the same argument applies. Maybe the authors have taken a proof via the pumping lemma for context-free languages that $\{0^n1^n2^n: n\geq 0\}$ is not context-free (while $\{0^n1^n: n\geq 0\}$ is), and have deleted the parts for the second pumping factor. This would lead pretty much to what you have posted here, I think.
Bayesian statistics, bivariate prior distribution
Your prior is just a Normal-gamma distribution. The likelihood is a standard normal likelihood, but obviously taken as the product of the $n$ conditionally independent observations (with the terms in the exponent rewritten by completing the square). The prior is conjugate, so the posterior distribution is also a Normal-gamma distribution. I would advise you to work out & slog through the proportional form & posterior parameters through $$ \text{Posterior} \propto \text{Prior} \times \text{Likelihood}, $$ but if you want to verify your answer you can do so here (under Normal likelihood with unknown but exchangeable mean and precision).
Lake with capacity of 1000 fish. How long will it take to reach 900?
You can show that $$N = \frac{1000}{1 + ce^{-0.2t}} $$ solves $$\frac{dN}{dt}= 0.2N \left( 1-\frac N{1000}\right)$$ for arbitrary $c$. The condition $N = 20$ at $t = 0$ fixes $c = 49$. Now solve $$ 900= \frac{1000}{1 + 49e^{-0.2t}}$$ for $t$.
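The last step is a couple of lines of arithmetic (a quick check, not part of the original answer):

```python
import math

# Solve 900 = 1000 / (1 + 49*exp(-0.2*t)) for t:
# exp(-0.2*t) = (1000/900 - 1)/49, hence t = -5*ln((1/9)/49) = 5*ln(441).
t = -5 * math.log((1000 / 900 - 1) / 49)

# Sanity check: plug t back into N(t).
N = 1000 / (1 + 49 * math.exp(-0.2 * t))
```

So the population reaches 900 at $t = 5\ln 441 \approx 30.4$ time units.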
Mean Curvature Flow equation, where does it come from?
To explain the notation, $F$ is a vector valued function $F = (F^1, \cdots, F^{n+1})$ and $$\Delta_t F = (\Delta_t F^1 , \cdots , \Delta_t F^{n+1}).$$ Now if $(x^1, \cdots, x^n)$ is a local coordinate on $M$ with induced metric $g_{ij}$ (depending on $t$), then for any function $f: M \to \mathbb R$, $$\Delta f = g^{ij} \nabla^2_{ij} f =g^{ij} \left(\frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma_{ij}^k \frac{\partial f}{\partial x^k}\right).$$ Note that $$\nabla_i \frac{\partial F}{\partial x^j} = \Gamma_{ij}^k \frac{\partial F}{\partial x^k}$$ Thus \begin{split} \Delta F &= g^{ij} \left(\frac{\partial ^2F}{\partial x^i \partial x^j} - \Gamma_{ij}^k \frac{\partial F}{\partial x^k}\right) \\ &= g^{ij} \left(\frac{\partial ^2F}{\partial x^i \partial x^j} - \nabla_i \frac{\partial F}{\partial x^j} \right) \\ &= g^{ij}\left( \frac{\partial ^2F}{\partial x^i \partial x^j}\right)^\perp \\ &= \vec H = -H \vec v. \end{split}
Confusion on the definition of $G_{\delta}$ sets
A countable intersection of open sets need not be open, and a $G_{\delta}$ set is a countable intersection of open sets. Any open subset $U$ of a topological space $X$ is $G_{\delta}$, because you can write $U=\bigcap_{i=0}^{\infty} X_i$ where $X_0=U$ and $X_i=X$ for all $i>0$.
Homotopy of a (non-spherical) cow.
The genus of the object is 3. Let me explain: If you take a sphere (or any other surface) and mark two points, which you then join with an arc that meets the sphere only at those two points, you can "inflate" that arc to make a torus. (More precisely, you can do surgery: remove a disk around each point in S^2; add a cylinder that runs along the arc, and sew things together along the matching circles). Since the surface remains closed after surgery, you can compute the Euler characteristic (2 - 2g) by triangulating things: make the two removed disks be triangles, and the inserted tube be a triangulated tube. In removing the disks, you lose two faces; in adding the tube, you get 6 new faces and 6 new edges. Total: a loss of two faces, so the Euler characteristic drops by two, so the genus goes up by one. Note that this analysis doesn't depend on the surface being a sphere: if you add a tube to any surface via surgery, the genus goes up by one, regardless of where the two endpoints happen to be. OK, let's model the cow as a sphere... [insert joke about mathematicians here] ...to which I'll add an alimentary tract, by joining the east and west poles of the sphere (you know what I mean!). Now it is, as you observed, a torus. Alternatively, the argument above shows that the genus increased by 1. Now I'm going to add a nose with a septum, by connecting Boston and Worcester with a small arc inside the sphere, and doing surgery. (Alternatively: think of a big tunnel dug from Boston to Worcester.) Another increase in the genus. Finally, I'm going to connect that "path around the septum" (the empty space inside the nose, i.e., the wall of the tunnel) to the alimentary tract with yet another path. (Another increase in genus). I've now surgered in 3 arcs, so the result is a sphere with 3 handles, i.e., genus 3.
Compare the integrals $I=\int_{0}^{1}2^{x^2}dx\;,J=\int_{0}^{1}2^{x^3}dx\;,K=\int_{1}^{2}2^{x^2}dx\;,L=\int_{1}^{2}2^{x^3}dx$
$K> I$ because $K=\int_0^1 2^{(x+1)^2}\,dx$, $I=\int_0^1 2^{x^2}\,dx$, and $2^{(x+1)^2} > 2^{x^2}$ for all $x\in[0,1]$.
Determining the area inscribed into square
Hint: The vertices of $T$ are on the sides of $S$, so the least possible value for the diagonal of $T$ is $10$ (the length of the sides of $S$).
Proof of that in an integral domain, every prime element is irreducible.
You don't need to examine the case $p\neq ab$. In order to prove that every prime element $p$ is irreducible, you have to show that IF $p=ab$, THEN $a$ or $b$ is a unit (see the definition of irreducible element). What if $p\neq ab$? It doesn't matter, we don't care: what matters to us is only the case $p=ab$.
How to construct rings with a given class number?
Yes, for every finite abelian group one can construct a Dedekind domain with class group isomorphic to that group. This is a result from the 1960s due to Claborn. For details see for example: Pete L. Clark: Elliptic Dedekind domains revisited. Enseignement Math. 55 (2009), 213-225.
Formality of Treating Differential Operators as a Variable in Solving Diff Eqs.
Question: Why are differential operators allowed to be treated in this way? Also, what particular subject or topic can I read to understand things like these? Answer: Your question is like asking about the Fourier or Laplace transform. Let me discuss using the Laplace transform: Define $L(f(x)) = \int f(x) e^{-sx}\, dx$. It turns out you can recover $f(x)$ from $L(f(x))$. Let the Laplace transform of $y=f(x)$ be written as $L(y)$. A property of $L$: $L(\frac{dy}{dx}) = sL(y)$. Hence the differential equation can be written as $P(s) L(y) = L(e^x)$ where $P(s)$ is a polynomial in $s$. Hence $y = L^{-1}(\frac{L(e^x)}{P(s)})$. Another property of $L$: $L^{-1} (Y_1(s) Y_2(s)) = L^{-1}(Y_1(s)) * L^{-1}(Y_2(s))$ where $*$ is convolution. We have: $y = e^x * L^{-1}(\frac{1}{P(s)})$. Let $L^{-1}(\frac{1}{P(s)})=p_1(x)$. Then $y = e^x * p_1 =\int e^{x-t} p_1(t)\, dt = e^x L(p_1)|_{s=1} = e^x \frac{1}{P(s)}\Big|_{s=1}$. This is exactly what the book suggests. You can see that the idea works only if the right hand side is $e^x$. Hope it's clear. Now coming to your first question: let $L(p(x)) = P(s)$. Then $$\frac{dP(s)}{ds} = \int \frac{\partial(p(x)e^{-sx})}{\partial s}\, dx = -\int x\, p(x)e^{-sx}\, dx = -L(x\,p(x)),$$ so differentiating in $s$ corresponds to multiplying by $-x$.
Find the infimum of a set
To show that the infimum of the above set is zero whenever $x$ is a positive irrational number: suppose that $\inf A$ is equal to some $a > 0$. Then every element of $A$ must be an integer multiple of $a$. Otherwise, we could find $a_1 < a_2$ in $A$ with $a < a_1, a_2 \leq 2a$; then $a_2 - a_1 < a$, and $a_2-a_1$ is also a member of the set $A$. But this contradicts the fact that $a$ is the infimum of $A$ unless $a_2 - a_1 =0$. So $a$ itself must belong to $A$. Now let $b$ be another element of $A$; then $b - k \cdot a < a$, where $k$ is the floor of $\frac{b}{a}$, which forces $b = k\cdot a$. So every element of $A$ is an integer multiple of $a$, and that means that $x$ is a rational number. Contradiction.
How to find the area of a semicircle inside of another semicircle?
N.B. $\operatorname{cs}(ACD)$ is the circle segment defined by the rays $AC$ and $AD$. In the picture below, we have that $$\operatorname{cs}(ACD)+\operatorname{cs}(BCD)$$ contains the part we're looking for. However, we have overcounted. Can you see by how much we overcounted (i.e. what we now need to subtract)?
Prob. 1, Sec. 25, in Munkres' TOPOLOGY, 2nd ed: The components and the path components of $\mathbb{R}$ with lower limit topology
Your solution is clear and accurate. Well-done! This may just be a typo on your part, and if it is then I apologise for nitpicking, but the sentence Now let $A$ be a set in $\mathbb{R}_l$ consisting of more than one points. should instead read . . . more than one point. Apart from this, I have no comments for improvement.
Are the sets below $\sigma$-algebra?
Does the following set belong to the given collection? $$ \bigcup_{n\ge 2}[n^{-1}, 1-n^{-1}] $$ Look, for example, at this question.
Verifying my proof: "the equation $2x - 6y = 3$ has no integer solution to $x$ and $y$"
Rewind to the point where you say $x=3y+3/2$. We rearrange this to $x-3y=3/2$, then note that since we have taken $x$ and $y$ to be integers, $x-3y$ is also an integer. But $3/2$ is not an integer, a contradiction.
norm of a quadratic form
It is, indeed. You can state the problem as that of maximizing the quadratic form $x^TAx$ subject to the constraint $x^Tx=1$. It is well known that the maximum you seek is exactly the maximum eigenvalue of $A$ and the $x$ attaining this maximum is the eigenvector associated to this eigenvalue. Thus, the quantity you are asking about is just the maximum eigenvalue's norm. More generally, given $A$ symmetric (no loss of generality here) and $B$ positive definite, you can prove that $$max \{x^TAx\;\;\colon\;\; x^TBx=1\}$$ is the maximum eigenvalue of $B^{-1}A$ at $x$ the corresponding eigenvector.
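A quick numerical illustration of this fact with NumPy (a sketch, not part of the original answer): the top eigenvalue of a random symmetric matrix is attained by its eigenvector on the unit sphere and never exceeded by other unit vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                      # symmetrize (no loss of generality)

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
lam_max = eigvals[-1]
v = eigvecs[:, -1]                     # unit eigenvector of the top eigenvalue

attained = v @ A @ v                   # the quadratic form at the eigenvector

# Random unit vectors never beat lam_max:
xs = rng.standard_normal((1000, 4))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
sampled = np.einsum('ij,jk,ik->i', xs, A, xs)
```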
Vertical motion of a particle on a sphere resting against a wall.
Once you know the take-off point and the velocity vector there, you can forget about the sphere: as the hint says, after that you have free projectile motion. The $x$ component of velocity is constant, so you can figure out the time until you hit the wall. Then you find the $y$ value for that time.
Matlab FFT-algorithm example, one simple question
I think it is a normalization. They halved the number of sample points they used, so they doubled the amplitude to keep the "integral" normalized.
What does it mean for a solution to a Linear DE to be homogeneous?
An example of a homogeneous equation is your $$u'' -u'+4u=0.\tag{H}$$ It is a very special homogeneous equation, since it has constant coefficients. Consider the inhomogeneous equation $$u''-u'+4u=e^t.\tag{I}$$ The article discusses, among other things, the fact that we can find all solutions of the inhomogeneous equation (I) by a) finding the general solution of the homogeneous equation (H), b) finding a single particular solution of the inhomogeneous equation (I), and c) adding the solutions found in a) and b). Then the article focuses on finding the general solution of equations like (H), and shows that a linear combination of solutions of a homogeneous equation is always a solution of the equation. These are solutions of the homogeneous equation, and not "homogeneous solutions." In the context of linear differential equations, homogeneous solution has no meaning.
Proving Is mth root of 2 an irrational number for every integer $m\ge 2$?
It's nearly the same: $\sqrt[m]{2}=\frac ab\\\implies2=\frac{a^m}{b^m}\\\implies2b^m=a^m\\\implies a=2k\\\implies2b^m=(2k)^m\ \leftarrow\text{ you got to this step}\\\implies2b^m=2^mk^m\\\implies b^m=2^{m-1}k^m\\\implies b=2n$ Hence, contradiction, since $a$ and $b$ must be coprime.
Connectivity vs clique number of a graph
Counterexample: Let $G=K_{n,n,\ldots,n}$ be the complete $t$-partite graph with all parts of size $n$. Then $$\omega(K_{n,n,\ldots,n})= t$$ while $$\kappa(K_{n,n,\ldots,n})= (t-1)n$$ Take $t=2$: then $\kappa(K_{n,n})=n$ is unbounded in $n$, while $\omega(K_{n,n})=2$. So any function $f$ satisfying $f(\kappa(G)) \leq \omega(G)$ for all graphs $G$ would have to satisfy $$f(n)\leq 2$$ for every $n$, i.e. $f$ would be bounded. Thus there is no unbounded function $f$ of the connectivity with $f(\kappa(G))\leq \omega(G)$.
Finding the Closest Permutation Matrix Given Two Vectors
For the first question: If $\vec a=(a_1,\ldots,a_n)$ and $\vec b=(b_1,\ldots,b_n)$, we need to find the image of every basis vector $\vec e_i$. For $\vec e_1$, you can pick any $j$ with $b_j=a_1$ and then map $\vec e_1\mapsto \vec e_j$ (i.e., $M\vec e_1=\vec e_j$, i.e., the $(j,1)$ entry of $M$ is $1$, or the value of the desired permutation $\pi$ evaluated at $1$ is $\pi(1)=j$). Continue with this method, but avoid repetition, i.e., when looking for $M\vec e_i$, pick any $j$ with $b_j=a_i$, but only among those indices you haven't picked yet. For the second question: We mainly have to decide which value in $\vec a$ to associate with which value in $\vec b$. And then, in case of repeated values on either side, their corresponding indices may be permuted at will. First let us find one such association that is optimal. Claim. If permutation $\pi$ minimizes $f(\pi):=\sum_i(a_i-b_{\pi(i)})^2$, then there exists a permutation $\sigma$ such that $$\tag1a_{\sigma(1)}\le a_{\sigma(2)}\le\ldots\le a_{\sigma(n)}$$ and $$\tag2b_{\pi(\sigma(1))}\le b_{\pi(\sigma(2))}\le\ldots\le b_{\pi(\sigma(n))}$$ Proof. There are certainly sorting permutations $\sigma$ that guarantee $(1)$. But for each such $\sigma$, there may exist indices $\ell$ with $b_{\pi(\sigma(\ell))}>b_{\pi(\sigma(\ell+1))}$ (whereas of course $a_{\sigma(\ell)}\le a_{\sigma(\ell+1)}$). Among all $\sigma$ with $(1)$, consider one that minimizes the number of such bad indices $\ell$. If no such $\ell$ exists, we are done. So assume otherwise. Let $j=\sigma(\ell)$, $k=\sigma(\ell+1)$, $\pi'=\pi\circ (j\;k)$. Then $$\begin{align} f(\pi')-f(\pi)&=(a_j-b_{\pi'(j)})^2+(a_k-b_{\pi'(k)})^2-(a_j-b_{\pi(j)})^2-(a_k-b_{\pi(k)})^2\\ &=(a_j-b_{\pi(k)})^2+(a_k-b_{\pi(j)})^2-(a_j-b_{\pi(j)})^2-(a_k-b_{\pi(k)})^2\\ &=-2a_jb_{\pi(k)}-2a_kb_{\pi(j)}+2a_jb_{\pi(j)}+2a_kb_{\pi(k)}\\ &=2(a_j-a_k)(b_{\pi(j)}-b_{\pi(k)})\\ &\le 0\end{align}$$ with equality iff $a_j=a_k$. By minimality of $f(\pi)$, we conclude that $a_j=a_k$.
But then $\sigma':=(j\;k)\circ \sigma$ is a permutation that sorts $\vec a$ and has one less bad index than $\sigma$, contradicting our choice of $\sigma$. Hence the claim follows. $\square$ Now to find all optimal permutations, you may permute arbitrarily among indices where components of $\vec a$ are equal, or similarly for components of $\vec b$. Note that considering only permutations among the components of $\vec a$, or only permutations among the components of $\vec b$, will not necessarily give you all optimal solutions. On the other hand, permuting both within $\vec a$ and within $\vec b$ may count some solutions repeatedly, namely when ranges of equal value "overlap" in the sorted order of the components of $\vec a$ resp. $\vec b$.
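The sorting argument can be checked against brute force on a small example (a sketch, not part of the original answer; the sample vectors are made up):

```python
import itertools

def cost(a, b, perm):
    """f(perm) = sum_i (a_i - b_perm(i))^2."""
    return sum((ai - b[pi]) ** 2 for ai, pi in zip(a, perm))

def sorted_matching(a, b):
    """Pair the i-th smallest entry of a with the i-th smallest entry of b."""
    n = len(a)
    order_a = sorted(range(n), key=lambda i: a[i])
    order_b = sorted(range(n), key=lambda i: b[i])
    perm = [0] * n
    for ia, ib in zip(order_a, order_b):
        perm[ia] = ib
    return tuple(perm)

a = [3.0, 1.0, 2.0, 2.0]
b = [0.5, 2.5, 1.5, 3.5]
best = min(itertools.permutations(range(len(a))), key=lambda p: cost(a, b, p))
```

The sorted matching attains the brute-force optimum, as the claim predicts.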
Triangle $ABC$ $XY$ parallel to $BC$
Actually this problem is immediately solved by Ceva's theorem and Thales. Let $E=CX \cap BY$ and $F= AE \cap BC$. Then $$\frac{AX\cdot BF\cdot CY}{XB\cdot FC\cdot YA}=1$$ Also, since $BC\parallel XY$, Thales gives $$\frac{AX}{XB}=\frac{AY}{YC}$$ This implies $BF=FC$.
show that the set $\{f \in C([0,1], \Bbb R) \mid \int_0^1 f(x)\,dx \in (0,3) \}$ is an open set
The map $$ \Gamma : C([0,1],\mathbb{R}) \to \mathbb{R} \\ f \mapsto\int_0^1f(t)\,dt $$ is continuous (indeed Lipschitz) with respect to the $d_\infty$ metric: $$ \big|\Gamma(f)-\Gamma(g)\big| = \left|\int_0^1(f-g)(t)\,dt\right| \leq \int_0^1|(f-g)(t)|\,dt \leq (1-0)\,d_\infty(f,g) $$ Equivalently, preimages of open sets of $\mathbb{R}$ via $\Gamma$ will be open. It suffices then to note that your set is exactly $\Gamma^{-1}\big((0,3)\big)$.
True/false : The Space of all continiuos real valued functions with compact support with supnorm metric is complete .
Hints: Each $f_n$ has compact support, since $x^{2}>n-1$ implies $f_n(x)=0$. Verify that $f_n(x) \to f(x) \equiv \frac 1 {1+x^{2}}$ uniformly on $\mathbb R$. Conclude that $\{f_n\}$ is Cauchy in the given space. Suppose it converges to some $g$ in the given space. Then $f_n \to f$ and $f_n \to g$ pointwise. Hence $f(x)=g(x)$ for all $x$. But $g$ has compact support and $f$ doesn't. This completes the proof.
find the minimum of the value $|a_{1}+a_{2}+\cdots+a_{11}|$
It's 1, according to my short Python 3 script, which did a random search rather than a brute-force search. After 1 million runs (a few seconds) it found 1 as the minimum. (Note: the original snippet branched on `value == 0`, which never uses the random `moves`; branching on `move` is the intended random search.)

    import numpy as np

    minimum_value = 10000
    for i in range(1000000):
        moves = np.random.choice([0, 1], 11)
        value = 0
        for move in moves:
            if move == 0:          # one of the two allowed moves
                value = -1 * value - 1
            else:                  # the other allowed move
                value += 1
        absolute_value = np.abs(value)
        if absolute_value < minimum_value:
            minimum_value = absolute_value
    print(minimum_value)
    >> 1
Hint for Lebesgue theory/functional analysis type of problem
You are dealing with the convolution of $f$ with $$ \phi_n(x) = \frac{1}{n}\chi_{[-n,0]}(x). $$ That is, $$ (f\star\phi_n)(x)=\int_{-\infty}^{\infty}f(t)\phi_n(x-t)dt=\frac{1}{n}\int_{x}^{x+n}f(t)dt = f_n(x) $$ Therefore $\|f_n\|_1 \le \|f\|_1\|\phi_n\|_1 = \|f\|_1$. If $g \in \mathcal{C}_c^{\infty}(\mathbb{R})$ (i.e., compactly supported, infinitely differentiable,) then $$ \begin{align} \|f_n-f\|_1 & \le \|f\star\phi_n-g\star\phi_n\|_1+\|g\star\phi_n-g\|_1+\|g-f\|_1 \\ & \le \|f-g\|_1+\|g\star\phi_n-g\|_1+\|g-f\|_1 \\ & = 2\|f-g\|_1+\|g\star\phi_n-g\|_1 \end{align} $$ First choose $g\in\mathcal{C}_c^{\infty}(\mathbb{R})$ so that $\|f-g\|_1 < \epsilon/3$ and then choose $n$ so that $\|g\star\phi_n-g\|_1 < \epsilon/3$.
Prove that $\int_0^x \ 1/\sqrt{1-x^2} dx$ is equal to length of unit circle arc?
Let $\vec r=\hat xx+\hat yy$ be the vector that traces the unit circle. A differential vector segment is given by $d\vec r=\hat xdx+\hat ydy$ and the magnitude of this vector $|d\vec r|=\sqrt{(dx)^2+(dy)^2}$ is the length of the differential segment. We also have from $x^2+y^2=1$ that $\frac{dy}{dx}=-x/y$. Thus, $$|d\vec r| =\sqrt{1+(-x/y)^2} dx=\frac{dx}{\sqrt{1-x^2}} $$ on the upper unit circle. If we want the arc length along the upper unit circle, we simply integrate this differential.
Galois Group of $x^{12}+x^{11}+\dots+x^2+x+1$
Hints: $$x^{12} + \ldots + x + 1 = \frac{x^{13} - 1}{x - 1}$$ The Galois group of $\,x^p-1\;$ over the rationals is isomorphic to $\,\left(\Bbb Z/p\Bbb Z\right)^*\;$ and thus has order equal to $\,\phi(p)=p-1\;$
Showing that a certain inequality holds for all $ x \in \mathbb{R} $ and $ n \in \mathbb{N} $.
When $x \geq 0$, we have $(x+1)^{2n+1}-x^{2n+1}=(1+(2n+1)x+ \ldots +(2n+1)x^{2n}) \geq 1 \geq 2 \left(\frac{1}{2}\right)^{2n+1}$. When $x \leq -1$, take $y=-(x+1) \geq 0$, so $(x+1)^{2n+1}-x^{2n+1}=(y+1)^{2n+1}-y^{2n+1} \geq 2 \left(\frac{1}{2}\right)^{2n+1}$ by above. When $-1<x<0$, take $y=-x, 0<y<1$, so $(x+1)^{2n+1}-x^{2n+1}=y^{2n+1}+(1-y)^{2n+1} \geq 2\left(\frac{y+(1-y)}{2}\right)^{2n+1}=2 \left(\frac{1}{2}\right)^{2n+1}$ by Power mean inequality.
About the definition of an integral element in commutative rings
This may be more an extended comment than a full answer, but maybe it helps. First, consider the case $R=\Bbb Z$ and $S=\Bbb Z[\frac13]$. The extension should not be considered "integral" because, well, you added a fraction. And indeed, the minimal polynomial of $\tfrac13$ would be $3X-1\in\Bbb Z[X]$. As you can see, it is not monic. I honestly feel that this example actually says it all: If you do not require the polynomial relations to be monic, then you will be adding something that somehow requires dividing by an element of $R$ which was not invertible. And that would simply not be "integral". You are free to add roots and stuff like that, but not invert. If you have some context of algebraic geometry, the following might help, but otherwise feel free to ignore it as it will only be confusing: From a geometric perspective, it means that you are allowed to bend, but you are not allowed to cut: The integral ring extension $\Bbb C[y]\subseteq \Bbb C[x,y]/(x^2-y)$ corresponds to projecting from a parabola to a line (the line was bent), while the extension $\Bbb C[y]\subseteq \Bbb C[y,y^{-1}]$ corresponds to the inclusion $\Bbb C^\times\subseteq \Bbb C$ (the origin was removed).
Truth Table and Valid Arguments given a Statement
If you want to prove that a statement is a a tautology, you need a full truth table showing that every combination of truth values yields a true statement. If you want to show that a statement is a valid argument, you need to show that true premises give a true statement.
Is there a technique to exactly calculate the Hausdorff dimension of the border of this fractal?
$\DeclareMathOperator\unit{unit}$Let's start with the dimension $\Delta$ of the area, which we already know to be $\Delta=2$. Let each of the colored squares have an area of $1\text{ unit}^\Delta$. Then the area of the template is: $$A_0=9\cdot\unit^\Delta$$ Now let's get finer-grained measuring equipment that is $\eta$ times more precise. That is, we will be measuring in $(\frac 1\eta\cdot\unit)^\Delta$ instead of $\unit^\Delta$. As it is we will find $9$ smaller squares inside each of the $9$ squares, so the total area is: $$A_1 = 9^2\cdot\left(\frac 1\eta\cdot\unit\right)^\Delta=9^2\cdot\eta^{-\Delta}\cdot\unit^\Delta$$ With the correct dimension $\Delta=2$, we must have $A=A_0=A_1$, and therefore: $$9=9^2\cdot\eta^{-\Delta}\quad\Rightarrow\quad \eta^\Delta=9\quad\Rightarrow\quad \eta^2=9\quad\Rightarrow\quad\eta=3$$ We measure on a $3$ times smaller scale in $1$ iteration. Let $D$ be the dimension of the circumference. The circumference with the same $\unit$ is: $$C_0 = 20\cdot\unit^D$$ When we measure again with $\frac 1\eta\cdot\unit$ we find: $$C_1 = 112\cdot\left(\frac 1\eta\cdot\unit\right)^D = 112\cdot\eta^{-D}\cdot\unit^D$$ We get with $C=C_0=C_1$: $$20=112\cdot\eta^{-D}= 112\cdot 3^{-D}\quad\Rightarrow\quad 3^D=\frac{112}{20}\quad\Rightarrow\quad D=\log_3 \frac{112}{20}\approx 1.568$$ As expected this is a bit higher than Koch's curve, which has $D=\log_3 4\approx 1.262$, and lower than $2$.
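The arithmetic, for reference (an illustration, not part of the original answer):

```python
import math

# Circumference measures 20 units at scale 1 and 112 units at scale 1/3,
# so the boundary dimension is D = log_3(112/20).
D = math.log(112 / 20, 3)

koch = math.log(4, 3)   # Koch curve dimension, for comparison
```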
Left adjoints preserve initial objects $\implies$ right adjoints preserve terminal objects
This follows from duality. In general right adjoints preserve limits, and so by duality left adjoints preserve colimits. The fact that colimits are the dual of limits (and the initial object is the dual of the terminal object) should be clear. So why is left adjoint the dual of right adjoint? Well, suppose we have functors $F: \mathcal{C} \to \mathcal{D}$ and $G: \mathcal{D} \to \mathcal{C}$, with for all objects $C$ in $\mathcal{C}$ and $D$ in $\mathcal{D}$: $$ \mathcal{D}(F(C), D) \cong \mathcal{C}(C, G(D)). $$ That is, $F$ is left adjoint to $G$. Then taking the dual everywhere, this is precisely the same as $$ \mathcal{D^\mathrm{op}}(D, F^\mathrm{op}(C)) \cong \mathcal{C^\mathrm{op}}(G^\mathrm{op}(D), C). $$ So indeed we see that $F^\mathrm{op}$ is right adjoint to $G^\mathrm{op}$. So to sum up, colimits in $\mathcal{C}$ are the same as limits in $\mathcal{C}^\mathrm{op}$. If $F: \mathcal{C} \to \mathcal{D}$ is a left adjoint, then $F^\mathrm{op}: \mathcal{C}^\mathrm{op} \to \mathcal{D}^\mathrm{op}$ is a right adjoint. So since right adjoints preserve limits, $F^\mathrm{op}$ preserves limits and thus $F$ preserves colimits.
Lifting a map to the total space of a circle bundle
There are a few issues I'd like to address. First, there is a distinction that must be made between a circle bundle and a principal circle bundle. What you described is a circle bundle: a fiber bundle where the fiber happens to be diffeomorphic to $S^1$. In contrast, a principal circle bundle is a circle bundle for which there is a consistent choice of a free, transitive $S^1$-action on the fibers. There are circle bundles which are not principal; the simplest example is probably the Klein bottle as an $S^1$ bundle over $S^1$. A principal bundle over a base $B$ gives rise to a distinguished element $e$ (called the Euler class) which lives in $H^2(B;\mathbb{Z})$. The condition on lifting is not that $f^\ast$ be the zero-map, but rather, that $f^\ast(e) = 0$. Tsemo gave a fine proof that this condition is sufficient; the necessity follows from the fact that the kernel of the map $H^2(B;\mathbb{Z})\rightarrow H^2(P;\mathbb{Z})$ is generated by $e$. But I want to stress that the argument involving the Euler class does not work for non-principal bundles (because they don't have an Euler class). And, indeed, the result is false in this case. To see this, consider the unit tangent bundle of $\mathbb{R}P^2$, $T^1\mathbb{R}P^2$. This is diffeomorphic to the lens space $L(4,1)$, as argued in Proposition 4.2 of this paper. Now, consider the natural projection $\pi:S^2\rightarrow \mathbb{R}P^2$. Because $H^2(\mathbb{R}P^2;\mathbb{Z})$ is torsion, while $H^2(S^2;\mathbb{Z})$ is not, the induced map on $H^2$ must be the trivial map. On the other hand, $\pi$ induces an isomorphism $\pi_2(S^2)\rightarrow \pi_2(\mathbb{R}P^2)$. Because lens spaces are covered by $S^3$, they have trivial $\pi_2$, so the map $\pi:S^2\rightarrow \mathbb{R}P^2$ cannot lift to a map into $L(4,1)$.
Holomorphic function on simply connected set with large derivative is injective?
No, it is false even with convex hypothesis on domain : take $f(z)=e^z$ on $-1\lt \Re(z)\lt 1 $.
Differential of Left and Right Translation Map, and Adjoint Map on $GL(n,\mathbb{R})$
If you want to compute $(DL_A)_B(X)$, you want to consider the action on a curve $\gamma(t)$ through $B$ with tangent vector $X$. Modifying your idea slightly, take the curve to be $\gamma(t)=B\exp(tB^{-1}X)$. That is, you want $$\frac d{dt}\Big|_{t=0} L_A(\gamma(t)) = \frac d{dt}\Big|_{t=0} AB\exp(tB^{-1}X)=AX.$$
Equality of tw0 arclengths
HINT: $$\sin(2t)\cos(t) = \frac{1}{2}(\sin(3t)+\sin(t))$$ $$\sin(2t)\sin(t) = \frac{1}{2}(\cos(t)-\cos(3t))$$
How to express this quotient in another way?
Let the expression be $$ - \frac{a}{c} + \frac{a+b}{c+d} - \frac{b}{d}$$
Number of ways to win chocolate game
Hint: The solution to nim is based on the binary representation of the pile sizes (or in your case the numbers of chocolates in the various containers). If we align the various binary numbers so that place values are aligned in columns, Alice wants to leave a position for Bob where each column has an even number of $1$s in it. A move by Alice (or Bob) consists of changing some of the bits of one of the numbers with the condition that the leftmost bit change must be from $1$ to $0$ (this is because we must remove chocolates). So she has to locate the leftmost column with an odd number of $1$s (if there is no such column, Alice has no winning moves). Then Alice must make sure that her change gets that column to an even number of $1$s (and possibly other adjustments will need to be made in other further to the right columns). But in that leftmost column with an odd number of $1$s, a $1$ must change to a $0$.
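The column argument is equivalent to the usual XOR formulation of nim; here is a minimal sketch (not part of the original answer) that lists every winning move from a position:

```python
def nim_winning_moves(piles):
    """All moves (container index, new count) that leave the XOR of the
    pile sizes equal to 0 -- by standard nim theory, exactly the winning
    moves.  An empty list means the position is lost under perfect play."""
    x = 0
    for p in piles:
        x ^= p
    moves = []
    for i, p in enumerate(piles):
        target = p ^ x
        if target < p:              # must actually remove chocolates
            moves.append((i, target))
    return moves
```

For example, with containers of sizes 3, 4, 5 the XOR is 2, and the only winning move is reducing the first container from 3 to 1 — precisely the "fix the leftmost odd column" move described above.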
Explain in words why $2^{W_t}$ is not a martingale
For $t>0$ we have $\mathbb{E}[2^{W_t}] = e^{t(\ln 2)^2/2} > 1 = 2^{W_0}$, so the process grows on average, hence is not a martingale. (We used the MGF of $N(0,t)$ evaluated at $s = \ln 2$ to compute the moment.)
How to show that $\lim_{x \to 0} x^p (\ln x)^r = 0$
First let us examine the following. Given \begin{equation} l > 0, \end{equation} we prove that \begin{equation} \lim_{x \to 0^+} \ln(x)\,x^l = 0. \end{equation} This comes straight from L'Hospital's rule. Substituting $x\mapsto 1/x$ puts the limit in indeterminate form, and we get the following: \begin{equation} \lim_{x \to \infty}\frac{\ln(1/x)}{x^l}= \lim_{x \to \infty}\frac{-1/x}{lx^{l-1}}=\lim_{x \to \infty} \frac{-1}{lx^{l}} = 0, \end{equation} since \begin{equation} l>0. \end{equation} We now have a very general statement. From here you can see that if you substitute $l = p/r$ (with $p$ and $r >0$), you can raise our new-found limit to the power $r$ and still get $0$ as your answer. More precisely, by continuity of $y\mapsto y^r$, we get the wanted limit like so: \begin{equation} 0=\lim_{x \to 0^+}(x^{\frac{p}{r}}\ln(x))^r =\lim_{x\to 0^+}x^p(\ln(x))^r \end{equation} Hope this helps.
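A numerical sanity check of the limit (illustration only; the values $p=1/2$, $r=3$ are arbitrary choices, not from the question):

```python
import math

def f(x, p, r):
    """x^p * (ln x)^r, the quantity whose limit at 0+ we are checking."""
    return x ** p * math.log(x) ** r

# |x^p (ln x)^r| shrinks as x -> 0+ (sampled at x = 10^-2, 10^-4, 10^-8, 10^-16):
vals = [abs(f(10.0 ** -k, 0.5, 3)) for k in (2, 4, 8, 16)]
```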
Basic question about real-analytic functions
A counterexample is $$ f(x) = xe^x = \sum_{n=1}^\infty \frac{x^n}{(n-1)!} $$ where $$ n! a_n = \frac{n!}{(n-1)!} = n $$ is unbounded. Another example is $$ g(x) = e^{x^2} = \sum_{n=0}^\infty \frac{x^{2n}}{n!} $$ where for the even-indexed coefficients $$ (2n)!a_{2n} = \frac{(2n)!}{n!} = (n+1)(n+2)\ldots(2n) $$ grows faster than any polynomial. What you can say about the growth is that $\lim_{n\to\infty} \sqrt[n]{|a_n|} = 0$ if $\sum_{n=0}^\infty a_n x^n $ is convergent for all $x \in \Bbb R$, this follows from the formula for the radius of convergence of a power series.
How many ways to arrange $20$ items on $4$ towers
For [a], what matters is which tower each ring is put on. The first ring can go on any of the four towers, so can the second, and so on. So the result is $4^{20}$. For [b], you can just order all $20$ rings in a single row, then put the first five on the first stand, rings number 6 through 10 on the second stand, and so on. So the answer is $20!$. For [c], it's a stars-and-bars count in which the stars (the $20$ distinct rings) are all different but the $3$ bars are identical: arrange the $23$ objects in a row and divide by the orderings of the bars, so the answer comes out to be $\frac{23!}{3!}$.
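The count for [c] can be sanity-checked by brute force on small cases (a sketch, not from the original answer): sum, over all assignments of rings to towers, the number of orderings on each tower, and compare with $(n+k-1)!/(k-1)!$.

```python
from itertools import product
from math import factorial

def count_ordered_arrangements(n, k):
    """Ways to place n distinct rings on k towers where the order on each
    tower matters: an assignment with tower counts c_1, ..., c_k admits
    c_1! * ... * c_k! orderings."""
    total = 0
    for assignment in product(range(k), repeat=n):
        ways = 1
        for tower in range(k):
            ways *= factorial(assignment.count(tower))
        total += ways
    return total
```

For $n=20$, $k=4$ the closed form gives $23!/3!$, matching the stars-and-bars argument with identical bars.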
Cauchy sequence is not a topological notion
On $(0,\infty)$, consider the usual distance $d_1$ ($d_1(x,y)=|x-y|$) and the distance $d_2$ defined by $d_2(x,y)=|\log(x)-\log(y)|$. These distances are equivalent (that is, they induce the same open sets), and therefore a sequence of elements of $(0,\infty)$ converges in $\bigl((0,\infty),d_1\bigr)$ if and only if it converges in $\bigl((0,\infty),d_2\bigr)$. However, the sequence $\left(\frac1n\right)_{n\in\Bbb N}$ is a Cauchy sequence in $\bigl((0,\infty),d_1\bigr)$, but not in $\bigl((0,\infty),d_2\bigr)$. So, the knowledge of the open subsets of a metric space $(X,d)$ is not enough to know its Cauchy sequences.
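A tiny numerical illustration of the two metrics (not part of the original answer): consecutive-style gaps of $(1/n)$ shrink in $d_1$ but stay a constant $\log 2$ apart in $d_2$.

```python
import math

d1 = lambda x, y: abs(x - y)
d2 = lambda x, y: abs(math.log(x) - math.log(y))

ns = (10, 100, 1000)
gaps_d1 = [d1(1 / n, 1 / (2 * n)) for n in ns]   # -> 0: Cauchy behaviour in d1
gaps_d2 = [d2(1 / n, 1 / (2 * n)) for n in ns]   # constant log(2): not Cauchy in d2
```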
Is the proof of the Hahn decomposition on Wikipedia valid?
If $t_n=\sup\{\mu(B):\,\ldots\}=\infty,$ then that set of real numbers is not bounded from above, and so for every $k\in \mathbb{N}$ you can find an admissible set $C_k\subset A_n$ such that $\mu(C_k)\ge k$. If you take $B_n=\cup_k C_k$, then $B_n\subset A_n$ and $\mu(B_n)\ge \mu(C_k)\ge k$ for every $k$, so $\mu(B_n)=\infty\ge \min\{1,\infty\}=1$. Note that $\mu(D)$ is finite, so what WP writes afterwards is fine if the series diverges.
$S_4$ does not have a normal subgroup of order 8?
Hint: The sizes of the conjugacy classes of $S_4$ are: $1,6,8,6,3$. A normal subgroup is a union of conjugacy classes and contains the identity.
Filling a conical tank
Think about what is happening. The water (volume) is being poured in at a constant rate. This relates to how the water level ($h$) changes and how the width of the water in the tank at that level ($r$) changes. Further, $r$ and $h$ are related. The volume of a cone is $$V = \frac13 \pi \, r^2 \, h$$ How is $h$ related to $r$? You know that, at the top, the radius is $2$ and $h=4$. Because this is a cone, we can say that $r = h/2$ at all levels of the cone. Thus, $$V(h) = \frac{1}{12} \pi \, h^3$$ We may then differentiate with respect to time; use the chain rule here $$\frac{dV}{dt} = \frac{\pi}{4} h^2 \frac{dh}{dt}$$ You are given $dV/dt$ and the height $h$ at which to evaluate; solve for $dh/dt$.
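With numbers, the last step is one line; the rate $dV/dt = 2$ and level $h = 3$ below are made-up values for illustration, since the question's data aren't quoted in the answer.

```python
import math

def dh_dt(dV_dt, h):
    """Solve dV/dt = (pi/4) * h^2 * dh/dt for dh/dt."""
    return 4 * dV_dt / (math.pi * h ** 2)

rate = dh_dt(2.0, 3.0)   # hypothetical: 2 volume-units/min at water level h = 3
```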
Binomial within a multinomial distribution
Note that $X=(X_1,...,X_k)=Y_1+\cdots+Y_n$ where the $Y$'s are i.i.d. with $$Y_1=(X_{1,1},...,X_{k,1}) \sim \text{Multinomial}(p_1,..,p_k,1).$$ Denote $S_{j,1}=X_{1,1}+\cdots+X_{j,1}$ for $j<k$. Since the single trial lands in exactly one category, $X_{1,1}+\cdots+X_{k,1}=1$ always, and we see that \begin{align*} P(S_{j,1}=1) &=P(\bigcup_{k=1}^j(X_{k,1}=1,X_{l,1}=0 \, \forall l\not=k)) \\ &=\sum_{k=1}^j P(X_{k,1}=1,X_{l,1}=0 \, \forall l\not = k) \\ &= \sum_{k=1}^j p_k \end{align*} and $$P(S_{j,1}=0)=1-\sum_{k=1}^j p_k.$$ Hence $$S_{j,i} \sim \text{Ber} \left(\sum_{k=1}^j p_k\right) \quad \quad \text{ for } i\in\{1,...,n\},$$ and these are mutually independent in $i$ by the independence of the $Y$'s. Now note for $j<k$ that $$Y=X_1+\cdots+X_j=S_{j,1}+\cdots +S_{j,n}\sim \text{Bin}\left(n,\sum_{k=1}^j p_k\right)$$
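A quick simulation check with NumPy (illustration only; the parameters $n=10$, $p=(0.2,0.3,0.1,0.4)$, $j=2$ are made up): the sum of the first two components should behave like $\text{Bin}(n, p_1+p_2)$.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 10, [0.2, 0.3, 0.1, 0.4]
trials = 200_000

X = rng.multinomial(n, p, size=trials)   # each row ~ Multinomial(n, p)
Y = X[:, :2].sum(axis=1)                 # X_1 + X_2, i.e. j = 2

q = p[0] + p[1]                          # claimed: Y ~ Bin(n, q) with q = 0.5
```

Mean and variance of the simulated `Y` match $nq$ and $nq(1-q)$.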
How do I find the radius of convergence of these power series
The formula for the convergence radius of a power series is $$\frac 1 R= \limsup_{n \to \infty} |a_n|^{1/n}.$$ In the first case $a_k =0$ if $ k \neq n!$ and $a_k =2^k$ if $k=n!$. Thus $$\frac 1 R=\lim_{k \to \infty} (2^k)^{1/k}=2.$$ In the second case $a_k=0$ if $k \neq n^2$ and $a_k=\frac k {2^\sqrt{k}}$ if $k=n^2$. Thus $$\frac 1 R=\lim_{k \to \infty }\left(\frac k {2^{\sqrt{k}}}\right)^{1/k}= \frac {\lim_{k \to \infty} k^{1/k}} {\lim_{k \to \infty} \left(2^{\sqrt{k}}\right)^{1/k}}=\frac 1 {\lim_{k \to \infty} 2^{k^{-1/2}}}=\frac 1 {2^{\lim_{k \to \infty} (k^{-1/2})}}=1.$$
Fundamental unit in the ring of integers $\mathbb Z[\frac{1+\sqrt{141}}{2}]$
$\sqrt{141}=[11,\overline{1,6,1,22}]$. Yes $95+8\sqrt{141}$ is a fundamental unit.
Find the center of circle given two tangent lines (the lines are parallel) and a point.
Let $(h, k)$ be the center of the circle; then the distance between the center $(h, k)$ and $(-1, 6)$ will be equal to the radius ($r$) of the circle: $$r=\sqrt{(h+1)^2+(k-6)^2}\tag 1$$ Now, the perpendicular distance of the center $(h, k)$ from the line $x-2y+8=0$: $$r=\left|\frac{h-2k+8}{\sqrt{1^2+(-2)^2}}\right|=\left|\frac{h-2k+8}{\sqrt 5}\right|\tag 2$$ The perpendicular distance of the center $(h, k)$ from the line $2x+y+6=0$: $$r=\left|\frac{2h+k+6}{\sqrt{2^2+1^2}}\right|=\left|\frac{2h+k+6}{\sqrt 5}\right|\tag 3$$ From (2) & (3), $$|h-2k+8|=|2h+k+6|$$ $$h-2k+8=\pm(2h+k+6)$$ $$h+3k=2\tag 4$$ or $$3h-k+14=0\tag 5$$ From (1) & (2), $$\sqrt{(h+1)^2+(k-6)^2}=\left|\frac{h-2k+8}{\sqrt 5}\right|$$ $$(h+1)^2+(k-6)^2=\frac{(h-2k+8)^2}{5}$$ Setting $k=3h+14$ from (5), one gets $$(h+1)^2+(3h+14-6)^2=\frac{(h-2(3h+14)+8)^2}{5}$$ $$h^2+2h-3=0\implies h=1, -3$$ Hence, the corresponding values of $k$ are $k=3(1)+14=17$ & $k=3(-3)+14=5$. Hence, the center of the circle is $\color{red}{(1, 17)}$ or $\color{red}{(-3, 5)}$. Note: It can be checked that for $h+3k=2$ from (4), there are no real values of $h$ & $k$.
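Both candidate centers can be verified numerically (a check, not part of the original answer): each is equidistant from the two lines and from the point $(-1,6)$.

```python
import math

def dist_point(c, p):
    """Euclidean distance between two points."""
    return math.hypot(c[0] - p[0], c[1] - p[1])

def dist_line(c, a, b, d):
    """Distance from point c to the line a*x + b*y + d = 0."""
    return abs(a * c[0] + b * c[1] + d) / math.hypot(a, b)

P = (-1, 6)
radii = {}
for center in [(1, 17), (-3, 5)]:
    r1 = dist_line(center, 1, -2, 8)    # line x - 2y + 8 = 0
    r2 = dist_line(center, 2, 1, 6)     # line 2x + y + 6 = 0
    r3 = dist_point(center, P)
    radii[center] = (r1, r2, r3)
```

For $(1,17)$ all three distances equal $5\sqrt5$, and for $(-3,5)$ they equal $\sqrt5$.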
Calculating an integral derived from the convolution of two Fourier transforms
This is an improper integral near $u=0$. The numerator goes to $1$ near $u=0$, but the denominator makes the integral diverge like $\int_0^1 u^{-2}\,du$. I suggest revisiting the two original functions whose Fourier transforms yielded your integral to check that they are in fact integrable and that their convolution is therefore defined. (For example, are the two functions $L^1$?) Another suggestion: calculate or estimate your integral over the real line but with the interval $[-\delta,\delta]$ removed, then see if the limit as $\delta\to0$ exists. Good luck! Sam PS: to make matters simpler, take $\beta=K=\mu=0$ and $\alpha=2, \sigma=1$. It might be easier to see what's going on in order to determine whether the integral can exist. PPS: my comment has become obsolete since the integral keeps changing.
Image of the Euler phi function
No, $\phi(n)$ does not attain all (even) positive integer values, and the list of even numbers which are not attained can be found here: http://oeis.org/A005277. It starts with $$ 14, 26, 34, 38, 50, 62, 68, 74, 76, 86, 90, 94, 98, 114, 118, 122, 124, 134, 142, $$ Of course, odd values $>1$ are never attained, because $\phi(n)$ has a factor $2$ if $n\ge 3$: $\phi$ is multiplicative, $\phi(p^n)=p^n-p^{n-1}$ is even for an odd prime $p$, and so is $\phi(2^n)=2^n-2^{n-1}$ for $n\ge 2$.
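The first nontotient $14$ can be confirmed by brute force. Using the standard bound $\phi(n)\ge\sqrt{n/2}$, any $n$ with $\phi(n)=14$ would satisfy $n\le 2\cdot 14^2=392$, so a finite search suffices:

```python
from math import gcd

def phi(n):
    # naive Euler totient: count 1 <= k <= n coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

# phi(n) >= sqrt(n/2), so phi(n) = 14 would force n <= 2 * 14**2 = 392
values = {phi(n) for n in range(1, 393)}
print(14 in values)                        # False: 14 is never attained
print(sorted(v for v in values if v <= 20))
```

By contrast, nearby even numbers like $10, 12, 16, 18$ are attained (e.g. $\phi(11)=10$, $\phi(13)=12$).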
What is this notation called and how does it work?
The notation $$ s_{\overline n | i} $$ is shorthand for the expression $$ \frac{(1+i)^n-1}i$$ which is the amount of an annuity of $1$ per payment interval for $n$ intervals. It is usually read as "$s$-angle-$n$ at $i$". For example of usage, see page 245 of this excerpt.
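The formula is straightforward to compute; here is a small illustrative function (the name `s_angle_n` is just a label for this sketch):

```python
def s_angle_n(n, i):
    """Accumulated value of an annuity of 1 per payment interval
    for n intervals at effective rate i: ((1+i)^n - 1) / i."""
    return ((1 + i) ** n - 1) / i

# e.g. 10 payments of 1 at 5% per period accumulate to about 12.578
print(round(s_angle_n(10, 0.05), 4))
```

Each payment of $1$ earns compound interest from its payment date to the end of the $n$th interval, which is where the geometric sum $\sum_{t=0}^{n-1}(1+i)^t$ behind the formula comes from.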
In a rhomboid $ABCD$ an angle bisector is $DM$ ($M\in BC)$. If $AB=6$, compute the length of the segment that joints the midpoints of $AM$ and $BD$
Let $O$ be the midpoint of $BD$; since the diagonals of a parallelogram bisect each other, $O$ is also the midpoint of $AC$. Because $AD\parallel BC$, the alternate angles give $\angle ADM=\angle DMC$, and $\angle ADM=\angle MDC$ since $DM$ is the bisector of $\angle ADC$. Hence triangle $DCM$ is isosceles with $$CM=CD=AB=6.$$ Now let $N$ be the midpoint of $AM$. In triangle $AMC$, the segment $NO$ joins the midpoints of sides $AM$ and $AC$, so it is a midline: $$NO=\frac{CM}{2}=\frac{6}{2}=3.$$
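A quick coordinate check confirms the result. The rhomboid below is an arbitrary choice with $AB=6$ (taking $D=(3,7)$ so that $BC>6$ and $M$ lands on segment $BC$):

```python
import math

# arbitrary rhomboid with AB = 6: A, B, D chosen, C = B + D - A
A, B, D = (0.0, 0.0), (6.0, 0.0), (3.0, 7.0)
C = (B[0] + D[0] - A[0], B[1] + D[1] - A[1])

# M on line BC with CM = CD = 6, which makes DM the bisector of angle ADC
t = 6.0 / math.dist(B, C)
M = (C[0] + t * (B[0] - C[0]), C[1] + t * (B[1] - C[1]))

def angle(u, v):
    return math.acos((u[0] * v[0] + u[1] * v[1]) /
                     (math.hypot(*u) * math.hypot(*v)))

# verify DM really bisects angle ADC
DA = (A[0] - D[0], A[1] - D[1])
DC = (C[0] - D[0], C[1] - D[1])
DM = (M[0] - D[0], M[1] - D[1])
assert abs(angle(DA, DM) - angle(DM, DC)) < 1e-9

N = ((A[0] + M[0]) / 2, (A[1] + M[1]) / 2)   # midpoint of AM
O = ((B[0] + D[0]) / 2, (B[1] + D[1]) / 2)   # midpoint of BD
print(math.dist(N, O))                        # 3.0
```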
Permutation multiplication in S3
Composing right to left (apply $(12)$ first, then $(23)$): $(23)(12)$ maps $1\mapsto 2\mapsto 3$, $3\mapsto 3\mapsto 2$, and $2\mapsto 1\mapsto 1$, so it's $(132)$.
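Representing permutations as dictionaries makes the right-to-left composition easy to check:

```python
def compose(f, g):
    # right-to-left convention: (f ∘ g)(x) = f(g(x)), i.e. apply g first
    return {x: f[g[x]] for x in g}

a = {1: 1, 2: 3, 3: 2}   # the transposition (23)
b = {1: 2, 2: 1, 3: 3}   # the transposition (12)

print(compose(a, b))      # {1: 3, 2: 1, 3: 2}, i.e. the 3-cycle (132)
```

Note that the opposite (left-to-right) convention would give $(123)$ instead, which is why stating the convention matters in $S_3$.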
Find the remainder for $\sum_{i=1}^{n} (-1)^i \cdot i!$ when dividing by 36 $\forall n \in \Bbb N$
Hint: For $n\geq 6$ one has: $\sum\limits_{i=1}^n(-1)^ii! = \sum\limits_{i=1}^5(-1)^ii! + \sum\limits_{i=6}^n(-1)^ii!$ Next, notice that for all $i\geq 6$ one has $i!=1\cdot \color{red}{2\cdot 3}\cdot 4\cdot 5 \cdot\color{red}{6}\cdots (i-1)\cdot i$ implying that for $i\geq 6$ one has $36$ divides evenly into $i!$. What does the right sum contribute to the remainder when divided by $36$ then? From here it should be easy enough to brute force the remainder of the solution.
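The stabilization described in the hint can be seen directly by computing the remainders for the first several $n$ (the code does not spoil the closed form, it just shows that the remainder becomes constant once $n\ge 5$):

```python
from math import factorial

def alt_sum(n):
    # sum_{i=1}^{n} (-1)^i * i!
    return sum((-1) ** i * factorial(i) for i in range(1, n + 1))

remainders = [alt_sum(n) % 36 for n in range(1, 21)]
print(remainders)

# terms with i >= 6 contain the factors 2*3 and 6, hence are divisible by 36,
# so the remainder no longer changes from n = 5 onward
assert all(r == alt_sum(5) % 36 for r in remainders[4:])
```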