$\bigcup\limits_{i=1}^n A_i$ has finite diameter for each finite $A_i$
It is not true that $\operatorname{diam}\left(\bigcup\limits_{i=1}^n A_i\right)\leq \operatorname{diam}(A_1)+\operatorname{diam}(A_2)+\cdots+\operatorname{diam}(A_n)$. For example, suppose the diameter of each of ten sets is two inches, but one of those ten sets is in Constantinople and another is in Adelaide. Pick a point $a_i$ in each of the sets $A_i$. Find the two indices $k,\ell$ such that $d(a_k,a_\ell) = \max \{ d(a_i,a_j) : i,j \in \{ 1,\ldots,n \} \}$. Given points $x\in A_k$, $y\in A_\ell$, we have $$ d(x,y) \le d(x,a_k)+d(a_k,a_\ell) + d(a_\ell, y) \le \underbrace{\operatorname{diam}(A_k) + d(a_k,a_\ell) + \operatorname{diam}(A_\ell)}. $$ The same estimate for arbitrary $x\in A_i$, $y\in A_j$ (using $d(a_i,a_j)\le d(a_k,a_\ell)$) gives $d(x,y)\le \operatorname{diam}(A_i)+d(a_k,a_\ell)+\operatorname{diam}(A_j)$. Then show that $2\max_i\operatorname{diam}(A_i)+d(a_k,a_\ell)$, which dominates every such bound (including the $\underbrace{\text{underbraced}}$ quantity), is a finite upper bound on the diameter of the union.
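A quick numerical sanity check of the triangle-inequality estimate (a sketch; the four point sets and the anchor choices are arbitrary):

```python
import itertools
import math
import random

random.seed(1)

def diam(S):
    # diameter of a finite point set
    return max((math.dist(p, q) for p in S for q in S), default=0.0)

# four finite planar sets, far apart from each other
sets = [[(random.uniform(0, 1) + 10 * i, random.uniform(0, 1)) for _ in range(5)]
        for i in range(4)]
anchors = [S[0] for S in sets]  # one point a_i picked in each A_i

# indices k, l maximizing the anchor distance d(a_k, a_l)
k, l = max(itertools.combinations(range(4), 2),
           key=lambda kl: math.dist(anchors[kl[0]], anchors[kl[1]]))

union = [p for S in sets for p in S]
# coarse but provable bound: 2 * max_i diam(A_i) + d(a_k, a_l)
bound = 2 * max(diam(S) for S in sets) + math.dist(anchors[k], anchors[l])
assert diam(union) <= bound
```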
how to show that $\frac{s}{1+s^2}$ is Lipschitz continuous?
We have \begin{align*} \lvert F(s) - F(t) \rvert = \frac{\lvert s(1+t^2) - t(1+s^2) \rvert}{(1+s^2)(1+t^2)} = \frac{\lvert (1-st)(s-t) \rvert}{(1+s^2)(1+t^2)}. \end{align*} Now using the AM-GM inequality, $$ \lvert 1-st \rvert \leq 1+|st| \leq 1+\frac{s^2+t^2}{2} \leq 1+s^2+t^2+s^2t^2 = (1+s^2)(1+t^2), $$ and so, it follows that $$ \lvert F(s) - F(t) \rvert \leq \lvert s - t \rvert. $$ Addendum. If the mean-value theorem is available to you, the proof can be made a bit easier (or more routine, to be precise): there exists $\xi$ between $s$ and $t$ such that $$ \left| F(s) - F(t) \right| = \left| F'(\xi) \right| \left| s - t \right| = \left|\frac{1-\xi^2}{(1+\xi^2)^2} \right| \left|s - t\right|, $$ and it is clear that the prefactor is bounded from above by $1$, hence we get $\leq \left| s - t\right|$.
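The bound $\lvert F(s)-F(t)\rvert\le\lvert s-t\rvert$ is easy to spot-check numerically (a quick sketch):

```python
import random

def F(s):
    return s / (1 + s * s)

random.seed(0)
for _ in range(10_000):
    s = random.uniform(-100, 100)
    t = random.uniform(-100, 100)
    # Lipschitz constant 1, up to floating-point slack
    assert abs(F(s) - F(t)) <= abs(s - t) + 1e-12
```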
Can a Sylow $p$-subgroup be a subset of the union of the rest of the Sylow $p$-subgroups?
This is not true in general: take $G=S_3$, then $Syl_2(G)=\{\langle (1 2) \rangle,\langle(1 3)\rangle,\langle(2 3)\rangle\}$. Clearly none of these subgroups is contained in the union of the other two.
How to prove $\Bbb Z[i]/(1+2i)\cong \Bbb Z_5$?
I think it is easier to do it this way. Let $\varphi:\Bbb Z[x]\to \Bbb Z[i]$ be the evaluation homomorphism sending $x\mapsto i$. Then the preimage of the ideal $I=(1+2i)=(i-2)$ of $\Bbb Z[i]$ under $\varphi$ is the ideal $J=(x^2+1,x-2)$ of $\Bbb Z[x]$. Thus $\varphi$ induces the isomorphism $$\Bbb{Z}[i]/I\cong \Bbb{Z}[x]/J.$$ Now observe that $$J=(x^2+1,x-2)=(x^2+1,x-2,x^2-4)=(5,x-2).$$ This shows that $$\frac{\Bbb Z[i]}{(1+2i)}=\frac{\Bbb{Z}[i]}{(i-2)}\cong \frac{\Bbb Z[x]}{(x^2+1,x-2)}=\frac{\Bbb{Z}[x]}{(5,x-2)}\cong\frac{\Bbb{Z}[x]/(x-2)}{(5,x-2)/(x-2)}.$$ Because under the isomorphism $\Bbb{Z}[x]/(x-2)\cong \Bbb Z$ (which sends $p(x)+{\color{red}{(x-2)}}$ to $p(2)$), $(5,x-2)/(x-2)$ is the ideal $5\Bbb{Z}$ of $\Bbb Z$, we get $$\frac{\Bbb Z[i]}{(1+2i)}\cong\frac{\Bbb Z}{5\Bbb Z},$$ as required.
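Concretely, the composite isomorphism sends $i\mapsto 2$ (since $2^2\equiv-1\pmod 5$), i.e. $a+bi\mapsto a+2b \pmod 5$; a small brute-force check that this map is multiplicative and kills $1+2i$:

```python
def phi(a, b):
    # a + b*i  ->  a + 2b (mod 5), the map induced by i -> 2
    return (a + 2 * b) % 5

R = range(-3, 4)
for a in R:
    for b in R:
        for c in R:
            for d in R:
                # (a+bi)(c+di) = (ac - bd) + (ad + bc)i
                assert phi(a * c - b * d, a * d + b * c) == (phi(a, b) * phi(c, d)) % 5

assert phi(1, 2) == 0  # 1 + 2i lies in the kernel
```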
Fixed point Iteration method with parameters
Close to the fixed point the linearization is $$x_{n+1}=g(α)+g'(α)(x_n-α)\implies x_{n+1}-α=g'(α)(x_n-α).$$ For fast convergence you want to have $|g'(α)|$ as small as possible (and smaller than $1$ for any convergence at all). Now $$ g'(x)=\frac{3x^2}k+1, $$ so ideally you need $k=-3α^2=-\sqrt[3]{108}$, but any value close to it will do, for instance $k=-5$ (as $5^3=125$). The interval on which $g$ is contracting is given by $\frac35x^2< 2\implies |x|<\sqrt{\frac{10}3}$, which is true for $|x|\le\frac53$. For smaller contraction factors the interval will be correspondingly smaller, for $q=\frac12$ this gives $\frac56\le x^2\le \frac52$, etc.
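As an illustration (assuming, consistently with $k^3=-108$, that the underlying fixed-point problem is $x^3=2$ with $g(x)=\frac{x^3-2}{k}+x$, which is a guess since the original question is not quoted), iteration with $k=-5$ converges quickly to $\alpha=\sqrt[3]{2}$:

```python
k = -5.0  # close to the ideal k = -108**(1/3) ≈ -4.76

def g(x):
    # fixed-point map for x**3 = 2 (assumed problem): g(alpha) = alpha,
    # and g'(x) = 3*x**2/k + 1, so |g'(alpha)| ≈ 0.048 is tiny
    return (x ** 3 - 2) / k + x

x = 1.0
for _ in range(50):
    x = g(x)

assert abs(x - 2 ** (1 / 3)) < 1e-12
```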
Can two vector spaces with different dimensions be isomorphic?
Hint: if $f\colon V\to W$ is an isomorphism, then $f$ maps a basis of $V$ to a basis of $W$.
How do I allocate probabilities to appropriate events?
Let $S^+$ denote the event that the test shows positive, $S$ that the test is accurate, $\bar S$ that the test is inaccurate, $T$ that the person has the disease, and $\bar T$ that the person does not have the disease. Then the probability of interest is \begin{align*} P(T|S^+) &= \frac{P(TS^+)}{P(S^+)}\\ &=\frac{P(S|T)P(T)}{P(ST)+P(\bar S \bar T)}\\ &=\frac{P(S|T)P(T)}{P(S|T)P(T)+P(\bar S|\bar T)P(\bar T)}\\ &=\frac{(.99)(.01)}{(.99)(.01)+(.01)(.99)}\\ &=\frac{1}{2}. \end{align*}
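Plugging in the numbers (a minimal sketch, using the stated $99\%$ accuracy and $1\%$ prevalence):

```python
p_T = 0.01     # prior: person has the disease
p_acc = 0.99   # test is accurate (for both groups)

# P(T | S+) = P(S|T)P(T) / (P(S|T)P(T) + P(~S|~T)P(~T))
posterior = (p_acc * p_T) / (p_acc * p_T + (1 - p_acc) * (1 - p_T))
assert abs(posterior - 0.5) < 1e-12
```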
How much bigger is 3↑↑↑↑3 compared to 3↑↑↑3?
By definition, $3\uparrow\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow\uparrow(3\uparrow\uparrow\uparrow(3\uparrow\uparrow\uparrow 3))$. \begin{matrix} 3\uparrow\uparrow\uparrow 3= & \underbrace{3^{3^{3^{3^{\cdot^{\cdot^{\cdot^{\cdot^{3}}}}}}}}} \\ & \mbox{7,625,597,484,987 copies of 3} \end{matrix} which should make it a little clearer how much bigger $3\uparrow\uparrow\uparrow\uparrow 3$ is than $3\uparrow\uparrow\uparrow 3$. It's hard to give a more concrete answer, because the towers of powers of $3$ involved get unreasonably large to compute or even to write down.
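A tiny recursive implementation of the up-arrow makes the definitions concrete (only very small arguments are computable, of course):

```python
def knuth(a, n, b):
    # a ↑^n b: n = 1 is ordinary exponentiation
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return knuth(a, n - 1, knuth(a, n, b - 1))

assert knuth(3, 1, 3) == 27                    # 3^3
assert knuth(3, 2, 3) == 7_625_597_484_987     # 3↑↑3 = 3^(3^3)
# 3↑↑↑3 is a power tower of 7,625,597,484,987 threes -- far out of reach
```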
Mobius function vanishes over sum of totient numbers
Fix $k$ and consider the set $S_k$ of squarefree natural numbers $n$ such that $\varphi(n)=k$. Note: we only care about the squarefree $n$, as the rest contribute only $0$ to the sum. There is a pairing within $S_k$ which matches the even members with the odd ones: pick $n\in S_k$; if $n$ is odd, pair it with $2n$ (note that $\varphi(2n)=\varphi(n)$ for odd $n$, so $2n\in S_k$), and if $n$ is even, pair it with $\frac n2$ (as $n$ is squarefree, $n$ even implies that $\frac n2$ is odd). In each such pair, one member has an even number of prime factors while the other has an odd number of prime factors, so their values of $\mu$ cancel. This suffices to prove your claim.
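A brute-force check of the cancellation $\sum_{\varphi(n)=k}\mu(n)=0$ for small $k$ (sketch; the search bound $n\le 2k^2$ covers all solutions since $\varphi(n)\ge\sqrt{n/2}$):

```python
def factor(n):
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def phi(n):
    r = n
    for p in factor(n):
        r = r // p * (p - 1)
    return r

def mobius(n):
    f = factor(n)
    if any(e > 1 for e in f.values()):
        return 0
    return (-1) ** len(f)

for k in (1, 2, 4, 8, 16):
    assert sum(mobius(n) for n in range(1, 2 * k * k + 1) if phi(n) == k) == 0
```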
A boolean algebra is countable iff the Stone space is separable
You cannot show what you want: let $X$ be a separable compact zero-dimensional space that is not second countable (the double arrow will do, or $\{0,1\}^{\omega_1}$ e.g.), then its clopen algebra is a counterexample to your claim. In fact $B$ is countable iff $\textrm{St}(B)$ is second countable, where the reverse is shown by refining the standard clopen base $N(b), b \in B$ to a countable one.
cost of providing school dinners
The solution is very poorly explained. From the data for $200$ students we know that $C+200x=9\cdot200=1800$, and from the data for $300$ students we know that $C+300x=8\cdot300=2400$. We therefore have a system of two equations in two unknowns, $$\left\{\begin{align*} &C+200x=1800\\ &C+300x=2400\;. \end{align*}\right.$$ Solve this system by any method that you know. I’d simply subtract the first equation from the second to get $100x=600$ and therefore $x=6$, so that $C=1800-200\cdot6=600$. This means that if we have $N$ students, the total cost is $$C+Nx=600+6N\;,$$ and the cost per student is therefore $$\frac{600+6N}N=\frac{600}N+6\;.$$ $\frac{600}N>0$ no matter how many students we have, so the cost per student is always greater than $6$. On the other hand, we can make $\frac{600}N$ as small as we like by taking $N$ large enough, so we can make the cost per student as close to $6$ as we want. For instance, if $N>600$, then $\frac{600}N<1$, and the cost per student is $\frac{600}N+6<7$.
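The same elimination, spelled out (a trivial sketch):

```python
# C + 200x = 1800 and C + 300x = 2400; subtract to eliminate C
x = (2400 - 1800) / (300 - 200)   # marginal cost per student
C = 1800 - 200 * x                # fixed cost
assert (x, C) == (6.0, 600.0)

def cost_per_student(N):
    return (C + x * N) / N

assert cost_per_student(200) == 9.0
assert cost_per_student(300) == 8.0
assert cost_per_student(601) < 7.0                           # > 600 students
assert all(cost_per_student(N) > 6.0 for N in range(1, 10_000))
```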
If $1-\frac{f(x)}x$ converges to $0$ on (finite) iteration, when does $f(x)$ also converge on iteration?
Let $f$ be continuous. Suppose that after $n$ iterations $g(x) = 1 - \frac{f(x)}{x}$ converges to $0$; then consider what happens after $n+1$ iterations. After $n+1$ iterations we have $g(0) = 1 - \frac{f(0)}{0} = 1 - f'(0)$ (interpreting $\frac{f(0)}{0}$ as the limit $f'(0)$). Since $0$ is an attracting point of $g$ we know $g'(0) < 1$. Also, after $2n$ iterations we again get $0$. Notice that since all $x$ map to the same number after a while, we are dealing with a function $g$ that is not analytic unless $n = 1$. If $n = 1$ then $f(x) = x$. In that case $f'(0) = 1$ and iteration of $f$ converges immediately. After $n-1$ iterations we have a value $q$ that satisfies $1 - f(q)/q = 0$, so $f(q)/q = 1$. This implies that $f$ has a fixed point, namely $q$. Since $q$ does not depend on $x$ but on $f$, yet at the same time is the result of iterations started at $x$, we must conclude that $f(x) = x$. Notice that since $g$ is not analytic, this is the only analytic possibility for $f$. This also explains $f(0)/0 = f'(0) = 1$, and thus after $n+1$ iterations we again get the value $1 - 1 = 0$. Notice that since $q$ does not depend on $x$, we cannot meaningfully talk about values after $n-2$ iterations distinct from the values after $0, 1, n-1, n, n+1$ iterations if $n-2 > 0$. This forces $n = 1$. This makes sense, since for $n = 1$, or equivalently $f(x) = x$, we get $g(g(x)) = 0$ and $g'(0) = 0$, as expected for a nonchanging function (since $n+1$ iterations also give $0$). Note that any nonlinear $f$ would make $g$ analytic, which would be a paradox. Also note that $n$, $2n$, and $n+1$ iterations all give the same value $0$, hence $n = 1$.
Perhaps a more interesting question is what happens when the limit of $g$ exists but is never reached in finitely many steps. When does that imply that iteration of $f$ converges? I think I know that too.
I assume $f$ needs to be continuous for the limit of $g$ to be $0$ and for that value actually to be reached in finitely many steps.
I am aware that the OP probably meant convergence in a finite number of steps $n$ depending on $x$. I took $n$ as not depending on $x$, with the following justification: if convergence takes $n$ steps for $x$ and $m$ steps for $y$, then both $x$ and $y$ have converged after $\operatorname{lcm}(n,m)$ steps, which is still finite. By analogy, for $t$ input values $x_1, x_2, \ldots$ we get convergence after the least common multiple of the individual step counts. Since we consider convergence for all input in a finite number of steps, this least common multiple of an infinite amount $t$ of step counts must still be finite, hence I can use a single $n$ for all input $x$. Hence the justification. Notice this also extends to complex numbers and, I think, to finite groups.
How many subsets contain no 3 consecutive elements?
To summarize the discussion in the comments: The recursion: $$a_n=a_{n-1}+a_{n-2}+a_{n-3}\quad a_1=2\quad a_2=4 \quad a_3=7$$ has characteristic polynomial $$x^3-x^2-x-1$$ which doesn't have particularly pleasant roots, which means that there isn't a simple closed formula for the results. The sequence $$\{a_n\}=\{2,4,7,13, 24, 44, 81, \cdots \}$$ is part of A000073 in oeis.org and that link contains a lot of information on the sequence, though of course it does not present a simple closed formula. Note that the characterization given there as the number of binary sequences of length $n$ for which the string $000$ does not appear is entirely equivalent to the one given by the OP here, as each string corresponds to a subset (where a $1$ indicates that the entry is absent and a $0$ indicates that it is present).
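A brute-force cross-check of the recursion against direct enumeration of subsets with no $3$ consecutive elements (sketch):

```python
def brute(n):
    # count subsets of {1,...,n} containing no 3 consecutive elements
    count = 0
    for mask in range(1 << n):
        bits = [(mask >> i) & 1 for i in range(n)]
        if all(not (bits[i] and bits[i + 1] and bits[i + 2]) for i in range(n - 2)):
            count += 1
    return count

a = [2, 4, 7]                        # a_1, a_2, a_3
for _ in range(8):
    a.append(a[-1] + a[-2] + a[-3])  # a_n = a_{n-1} + a_{n-2} + a_{n-3}

assert [brute(n) for n in range(1, 12)] == a
```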
Pittnauer’s Fixed Point Theorem
The uniqueness is easy. For the existence, define $x_p=T^p(x_0)$ for any $x_0$ and show that for $p \geq n$, $d(Tx_p,Tx_{p+1}) \leq kd(x_p,T^nx_{p-n})+kd(x_{p+1},T^nx_{p-n})$. As a consequence, $\sum_p{d(x_p,x_{p+1})}$ converges, thus $(T^p(x_0))_p$ converges and the rest is standard.
Linear Algebra Projections and orthogonality
The definition of projection is $$pr_\vec{a}(\vec{b}) = \frac{(\vec{a},\vec{b})}{(\vec{a},\vec{a})}\vec{a}$$ where $(\vec{a},\vec{b})$ is an inner product between $\vec{a}$ and $\vec{b}$. (In this case, we have the dot product). You can conclude with a geometric intuitive argument that $\vec{b}-pr_\vec{a}(\vec{b})$ is orthogonal to $\vec{a}$. Just draw a right triangle with $\vec{b}$ as its hypotenuse and cathetus $pr_\vec{a}(\vec{b})$ (on the line through $\vec{a}$). $\vec{b}-pr_\vec{a}(\vec{b})$ will be the other cathetus, hence orthogonal to $\vec{a}$. To prove orthogonality, just show that $(\vec{b}-pr_\vec{a}(\vec{b}),\vec{a})=0$. EDIT: To find the transformation matrix, just compute the transformation of the canonical basis and put these vectors as columns in $P$, and you can easily prove why it works. For example, if you apply the transformation $P$ on the vectors $e_1=(1,0,0)$, $e_2=(0,1,0)$ and $e_3=(0,0,1)$ you can obtain, say, $Pe_1=v_1,Pe_2=v_2,Pe_3=v_3$. All you have to do is build the matrix: $$P = \begin{bmatrix}v_1 & v_2 & v_3 \end{bmatrix} $$ that is, $v_1$ is the first column, $v_2$ is the second column and $v_3$ the third column. Well, obviously you can't use the matrix $P$ to compute $Pe_1$ and so on. These transformations you must compute by hand, using the projection and orthogonality concepts introduced before.
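A small sketch of the "columns are images of the basis vectors" recipe, for the map $\vec b\mapsto \vec b-pr_\vec{a}(\vec b)$ (the vector $\vec a$ below is an arbitrary choice, since the question's data is not quoted):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def residual(a, b):
    # b - pr_a(b): the component of b orthogonal to a
    c = dot(a, b) / dot(a, a)
    return [bi - c * ai for ai, bi in zip(a, b)]

a = [1.0, 2.0, 2.0]
basis = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0])
P_columns = [residual(a, e) for e in basis]   # the columns of the matrix P

# each column is orthogonal to a, as the right-triangle argument predicts
for col in P_columns:
    assert abs(dot(col, a)) < 1e-12
```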
Probability to win one prize in 5 draws from 100 tickets with one prize for each draw
Since you hold one ticket, the five prizes must be chosen from the other tickets. Thus the probability that you do not win any prize is $\dfrac{\binom{99}{5}}{\binom{100}{5}}$ and the probability that you win one prize is $1 - \dfrac{\binom{99}{5}}{\binom{100}{5}}$
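The binomial coefficients simplify nicely: $\binom{99}{5}\big/\binom{100}{5} = \frac{95}{100}$, so the winning probability is $\frac1{20}$:

```python
from math import comb

p_win = 1 - comb(99, 5) / comb(100, 5)
assert abs(p_win - 0.05) < 1e-12   # C(99,5)/C(100,5) = 95/100
```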
The additive property of mutual information
$$I(X;Y_1,Y_2)=H(Y_1,Y_2)-H(Y_1,Y_2|X)=H(Y_1)+H(Y_2)-I(Y_1;Y_2)-H(Y_1,Y_2|X)$$ $$I(X;Y_1)=H(Y_1)-H(Y_1|X)$$ $$I(X;Y_2)=H(Y_2)-H(Y_2|X)$$ Thus $$\begin{align} I(X;Y_1,Y_2)-\left(I(X;Y_1)+I(X;Y_2)\right)&=H(Y_1|X)+H(Y_2|X)-I(Y_1;Y_2)-H(Y_1,Y_2|X)\\ &=I(Y_1;Y_2|X)-I(Y_1;Y_2) \end{align}$$ But $I(Y_1;Y_2|X)-I(Y_1;Y_2)$, which is called the interaction information, can be positive, negative, or zero. See here and here for more information on positive and negative interaction information.
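A concrete case: the XOR channel $X=Y_1\oplus Y_2$ with $Y_1,Y_2$ independent fair bits gives $I(X;Y_1)=I(X;Y_2)=0$ but $I(X;Y_1,Y_2)=1$ bit, so the difference above equals $1$ here. A minimal sketch computing the entropies from the joint distribution:

```python
import math

# joint pmf of (X, Y1, Y2) with Y1, Y2 fair independent bits and X = Y1 xor Y2
p = {(y1 ^ y2, y1, y2): 0.25 for y1 in (0, 1) for y2 in (0, 1)}

def H(idx):
    # entropy (in bits) of the marginal on the coordinates in idx
    marg = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + v
    return -sum(v * math.log2(v) for v in marg.values() if v > 0)

I_X_Y1 = H((0,)) + H((1,)) - H((0, 1))
I_X_Y2 = H((0,)) + H((2,)) - H((0, 2))
I_X_Y12 = H((0,)) + H((1, 2)) - H((0, 1, 2))

assert abs(I_X_Y1) < 1e-12 and abs(I_X_Y2) < 1e-12
assert abs(I_X_Y12 - 1.0) < 1e-12   # strictly superadditive in this example
```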
Find function corresponding to a Taylor series
As you saw in the comments already, you can differentiate the series, $f(x) = \sum_{n=2}^\infty\frac{x^n}{n(n-1)}$ twice to see $f''(x) = \sum_{n=2}^\infty x^{n-2}$. You can now do a substitution of $m=n-2$ to see $f''(x) = \sum_{m=0}^\infty x^m$. This is the geometric series $g(x) = \frac{1}{1-x}$. From this, you can integrate twice to find $f(x)= x + \ln(1-x)-x\ln(1-x)$.
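A numerical comparison of the partial sums against the closed form (a quick sketch):

```python
import math

def series(x, terms=200):
    return sum(x ** n / (n * (n - 1)) for n in range(2, terms))

for x in (0.1, 0.5, -0.9):
    closed = x + (1 - x) * math.log(1 - x)   # = x + ln(1-x) - x*ln(1-x)
    assert abs(series(x) - closed) < 1e-12
```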
Closed form of $\int_0^x \frac{d y}{\sqrt{y^2+n^2}}\ln\frac{\sqrt{y^2+n^2}+y}{\sqrt{y^2+n^2}-y}$
The integral can be written as $$I_n=\int_0^x \frac{\log \left(\sqrt{n^2+y^2}+y\right)-\log \left(\sqrt{n^2+y^2}-y\right)}{\sqrt{n^2+y^2}}\, dy$$ Substitute $\sqrt{n^2+y^2}+y=t$; as $y>0$ we get $y=\frac{t^2-n^2}{2 t}$. Differentiating, $dy=\dfrac{n^2+t^2}{2 t^2} \,dt$, and since $\sqrt{n^2+y^2}=\dfrac{n^2+t^2}{2t}$ we have $\dfrac{dy}{\sqrt{n^2+y^2}}=\dfrac{dt}{t}$. The interval of integration becomes $\left(n;\;x+\sqrt{n^2+x^2}\right)$. Moreover $\left(\sqrt{n^2+y^2}+y\right)\left(\sqrt{n^2+y^2}-y\right)=n^2$, so $\log\left(\sqrt{n^2+y^2}-y\right)=2\log n-\log t$ and the numerator equals $2\log t-2\log n$. The integral becomes $$I_n=\int_n^{x+\sqrt{n^2+x^2}} \left(2\log t-2\log n\right)\frac{dt}{t} =\Big[\left(\log t-\log n\right)^2\Big]_n^{x+\sqrt{n^2+x^2}} =\log^2\frac{x+\sqrt{n^2+x^2}}{n} =\operatorname{arcsinh}^2\frac{x}{n}$$ Hope this helps
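A midpoint-rule check of the closed form $I_n=\operatorname{arcsinh}^2(x/n)$ (a numerical sketch):

```python
import math

def integrand(y, n):
    s = math.sqrt(y * y + n * n)
    return math.log((s + y) / (s - y)) / s

def I_numeric(x, n, steps=200_000):
    # midpoint rule on [0, x]
    h = x / steps
    return sum(integrand((i + 0.5) * h, n) * h for i in range(steps))

x, n = 2.0, 1.0
closed = math.asinh(x / n) ** 2   # = log^2((x + sqrt(x^2 + n^2)) / n)
assert abs(I_numeric(x, n) - closed) < 1e-6
```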
$ \frac{x_1}{1+x_1^2} + \frac{x_2}{1+x_1^2+x_2^2} +...+\frac{x_n}{1+x_1^2+x_2^2+...x_n^2} \le \sqrt{n}$ for $x_i > 0$
Let $x_0=0$. Thus, by C-S $$\sum_{k=1}^n\frac{x_k}{1+\sum\limits_{i=1}^kx_i^2}\leq\sqrt{n\sum_{k=1}^n\frac{x_k^2}{\left(1+\sum\limits_{i=1}^kx_i^2\right)^2}}\leq$$ $$\leq\sqrt{n\sum_{k=1}^n\frac{x_k^2}{\left(1+\sum\limits_{i=0}^{k-1}x_i^2\right)\left(1+\sum\limits_{i=1}^kx_i^2\right)}}=\sqrt{n\left(\frac{\sum\limits_{k=1}^nx_k^2}{1+\sum\limits_{k=1}^nx_k^2}\right)}\leq\sqrt{n}.$$
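A randomized spot-check of the inequality (sketch):

```python
import math
import random

random.seed(0)
for _ in range(1_000):
    n = random.randint(1, 8)
    xs = [random.uniform(0.01, 10.0) for _ in range(n)]
    total, s = 1.0, 0.0
    for x in xs:
        total += x * x          # 1 + x_1^2 + ... + x_k^2
        s += x / total
    assert s <= math.sqrt(n) + 1e-12
```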
How to solve an integral with the use of arcsine
If you haven't studied integration by parts or trig substitution, you can do it another way: $y=\sqrt{a^2-x^2}$ is the equation of the upper half of the circle of radius $a$, centered at the origin. So, to ask for $\int_{-a}^x \sqrt{a^2-t^2}\,dt$ is the same thing as asking for the area of the region below the circle between $-a$ and $x$, which, if you draw the picture, amounts to computing: the area of the quarter disk to the left of the $y$-axis: $\frac{\pi a^2}{4}$, the area of a circular sector: $\frac{1}{2}a^{2}\theta=\frac{1}{2}a^{2}\sin ^{-1}\left ( \frac{x}{a} \right )$, and the area of a triangle: $\frac{1}{2}x\sqrt{a^2-x^2}$. Now add these to get the result.
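A numerical check of the geometric decomposition; note that the region from $-a$ to $x$ contains the quarter disk left of the $y$-axis (area $\frac{\pi a^2}{4}$) in addition to the sector and the triangle (sketch):

```python
import math

def area_numeric(x, a, steps=200_000):
    # midpoint rule for the integral of sqrt(a^2 - t^2) over [-a, x]
    h = (x + a) / steps
    return sum(math.sqrt(max(a * a - (-a + (i + 0.5) * h) ** 2, 0.0)) * h
               for i in range(steps))

a, x = 2.0, 1.0
closed = (math.pi * a * a / 4                    # quarter disk, -a to 0
          + 0.5 * a * a * math.asin(x / a)       # circular sector, 0 to x
          + 0.5 * x * math.sqrt(a * a - x * x))  # triangle
assert abs(area_numeric(x, a) - closed) < 1e-4
```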
Other inner products for $\mathbb{R}^n$
Yes, for $n > 1$. For any $n \times n$ matrix $A$, $$\phantom{(\ast)} \qquad \langle {\bf x}, {\bf y}\rangle := {\bf y}^{\top} A {\bf x} \qquad (\ast)$$ defines a bilinear form on $\Bbb R^n$. If $A$ is symmetric, then so is the bilinear form, i.e., $$\langle {\bf y}, {\bf x}\rangle = \langle {\bf x}, {\bf y}\rangle ,$$ and in that case all of the eigenvalues of $A$ are real. We can show that the bilinear form $(\ast)$ is in fact an inner product iff the eigenvalues of $A$ are all positive, so to establish the existence of an inner product not of the form $\langle {\bf x}, {\bf y}\rangle = \sum_{i = 1}^n \lambda_i x_i y_i$ it's enough to find a symmetric matrix $A$ that is not diagonal but whose eigenvalues are all positive. A simple example is $$\pmatrix{1&\epsilon\\\epsilon&1\\&&1\\&&&\ddots\\&&&&1} ,$$ which corresponds to the bilinear form $$\langle {\bf x}, {\bf y}\rangle = x_1 y_1 + \cdots + x_n y_n + \epsilon (x_1 y_2 + x_2 y_1) . $$ The eigenvalues of this matrix are $1 - \epsilon, 1, 1 + \epsilon$, so via $(\ast)$ this bilinear form is an inner product iff $0 < |\epsilon| < 1$. Remark Conversely, all inner products can be written as $(\ast)$ for some symmetric matrix $A$, and we can recover $A$ by setting $$A_{ij} = {\bf e}_i \cdot {\bf e}_j$$ for the standard basis $({\bf e}_i)$. On the other hand, given any inner product on $\Bbb R^n$, applying the Gram-Schmidt Process produces an orthonormal basis $({\bf f}_i)$, so the matrix representation of the inner product with respect to that basis is the identity matrix, $I_n$. In this sense, all inner products on $\Bbb R^n$ are equivalent.
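Verifying the $\epsilon$-perturbed example numerically (sketch; numpy, with the arbitrary choices $n=4$ and $\epsilon=\frac12$):

```python
import numpy as np

eps = 0.5
A = np.eye(4)
A[0, 1] = A[1, 0] = eps

w = np.linalg.eigvalsh(A)
assert np.allclose(np.sort(w), [1 - eps, 1.0, 1.0, 1 + eps])
assert np.all(w > 0)   # positive definite, hence an inner product

x = np.array([1.0, 2.0, 0.0, -1.0])
y = np.array([0.5, 1.0, 3.0, 2.0])
# <x, y> = x.y + eps*(x_1 y_2 + x_2 y_1), matching the displayed form
assert np.isclose(y @ A @ x, x @ y + eps * (x[0] * y[1] + x[1] * y[0]))
```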
Matrix of the projection
Hint: Recall that for example the first column of $A$ is given by $Ae_1$. So $$Ae_1 = e_1 - (e_1 \cdot \vec n) \vec n = (1,0,0)^T - \frac{1}{\sqrt 2} (\frac{1}{\sqrt 2} , 0, -\frac{1}{\sqrt 2})^T = (1,0,0)^T + (-\frac 12 , 0, \frac 12)^T$$
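The full matrix drops out of $A = I - \vec n\,\vec n^{\,T}$; a quick check that its first column matches the computation above (numpy sketch):

```python
import numpy as np

n = np.array([1 / np.sqrt(2), 0.0, -1 / np.sqrt(2)])
A = np.eye(3) - np.outer(n, n)   # x -> x - (x . n) n

assert np.allclose(A[:, 0], [0.5, 0.0, 0.5])   # A e_1, as computed above
assert np.allclose(A @ n, 0.0)                 # n is killed by the projection
assert np.allclose(A @ A, A)                   # projections are idempotent
```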
How to solve an equation where the variable is in the exponent?
Consider that you look for the zeros of the function $$f(n)=2^{n/8}-n$$ for which $$f'(n)=2^{\frac{n}{8}-3} \log (2)-1\qquad \text{and} \qquad f''(n)=2^{\frac{n}{8}-6} \log ^2(2)\quad > 0 \quad \forall n$$ The first derivative cancels at $$n_*=24-\frac{8 \log (\log (2))}{\log (2)}\approx 28.2301$$ This point is a minimum by the second derivative test and $$f(n_*)=\frac{8 \left(1+\log \left(\frac{\log (2)}{8}\right)\right)}{\log (2)}\approx -16.6886$$ So, two roots. To approximate the roots, build a Taylor expansion around $n_*$ to get $$f(n)=\frac{8 \left(1+\log \left(\frac{\log (2)}{8}\right)\right)}{\log (2)}+\frac{\log (2)}{16} \left(n-n_*\right)^2+O\left(n-n_*)^3\right)$$ This gives two solutions $8.60300$ and $47.8573$. Now, let us start Newton method with these estimates. The iterates would be For the first root $$\left( \begin{array}{cc} k & n_k \\ 0 & 8.6029995 \\ 1 & 0.6563652 \\ 2 & 1.0991250 \\ 3 & 1.0999970 \end{array} \right)$$ For the second root $$\left( \begin{array}{cc} k & n_k \\ 0 & 47.857262 \\ 1 & 44.427279 \\ 2 & 43.601473 \\ 3 & 43.559365 \\ 4 & 43.559260 \end{array} \right)$$ As said in comments, there is an explicit solution in terms of the Lambert function $$-\frac{8 W_0\left(-\frac{\log (2)}{8}\right)}{\log (2)} \qquad \text{and} \qquad -\frac{8 W_{-1}\left(-\frac{\log (2)}{8}\right)}{\log (2)}$$
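The Newton iteration itself is a few lines (sketch):

```python
import math

f = lambda n: 2 ** (n / 8) - n
fp = lambda n: 2 ** (n / 8) * math.log(2) / 8 - 1

def newton(x, steps=60):
    for _ in range(steps):
        x -= f(x) / fp(x)
    return x

r1 = newton(8.603)    # Taylor estimate for the first root
r2 = newton(47.857)   # Taylor estimate for the second root
assert abs(f(r1)) < 1e-9 and abs(f(r2)) < 1e-9
assert 1.0 < r1 < 1.2 and 43.0 < r2 < 44.0
```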
With standard dot product find all isometries
Hint: $(0,0,0)$ must be mapped to a multiple of $(0,1,0)$. We are also given that $f[(0,0,1)] = (0,1,0)$. Deduce that, since $f$ must be an isometry, there are two valid choices for $f[(0,0,0)]$. What are they? In each case, consider the linear isometry $\ell(x) = f(x) - f[(0,0,0)]$. We now know (in each case) exactly what $\ell$ does to $(0,1,0)$ and $(0,0,1)$. What are the two possibilities for $\ell[(1,0,0)]$?
$\sum^\infty_{n=0}\frac {x^{4n}}{(2n)!}$ is Taylor's series of which function?
Hint: $$e^y=\sum_{r=0}^\infty\dfrac{y^r}{r!}$$ Replace $y$ with $x^2,-x^2$ one by one and add to find $e^{x^2}+e^{-x^2}$: the odd-$r$ terms cancel and the even-$r$ terms double, so the desired sum is half of this, namely $\cosh(x^2)$.
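A quick numerical check that the series equals $\cosh(x^2)=\frac{e^{x^2}+e^{-x^2}}2$:

```python
import math

def series(x, terms=30):
    return sum(x ** (4 * n) / math.factorial(2 * n) for n in range(terms))

for x in (0.0, 0.7, 1.3):
    assert abs(series(x) - math.cosh(x * x)) < 1e-12
```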
Solve for $p$ in this equation $\;2^{p−1}(2^p − 1) = X$.
Hint: Set $Y=2^{p-1}$; then your equation is equivalent to $$Y(2Y-1)=X.$$
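Continuing the hint: the positive root of $2Y^2-Y-X=0$ is $Y=\frac{1+\sqrt{1+8X}}4$, and then $p = 1+\log_2 Y$. A quick check on the even perfect numbers $28$ and $496$ (sketch):

```python
import math

def solve_p(X):
    # Y(2Y - 1) = X with Y = 2^(p-1)  =>  2Y^2 - Y - X = 0
    Y = (1 + math.sqrt(1 + 8 * X)) / 4
    return 1 + math.log2(Y)

assert abs(solve_p(28) - 3) < 1e-12    # 2^2 * (2^3 - 1) = 28
assert abs(solve_p(496) - 5) < 1e-12   # 2^4 * (2^5 - 1) = 496
```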
The definition of Borel sigma algebra
$(x,y) = \displaystyle\bigcup_{\substack{a,b\in\mathbb{Q} \\ x<a<b<y}} (a,b)$.
Trace minimization subject to constraints
First, let $L=(\sqrt{\Sigma})^{-1}H$; the minimization problem becomes $$ \min_{K:K\sqrt{\Sigma}\cdot L=I}{\rm tr}((K\sqrt{\Sigma})(K\sqrt{\Sigma})^T) =\min_{S:SL=I}\,{\rm tr}(SS^T) =\min_{S:SL=I}\,\Vert S\Vert^2_{HS} $$ Where $\Vert\cdot\Vert_{HS}$ is the Hilbert-Schmidt norm. This minimization problem is well-known, and its solution goes back to Penrose: "On best approximate solutions of linear matrix equations". The minimum is attained at a unique $S$, namely the pseudo-inverse of $L$; in our case $S=L^{+}=(L^TL)^{-1}L^T$ because $L$ has full column rank. Rolling back we see that the solution of the original minimization problem is given by $\tilde K=(H^T\Sigma^{-1}H)^{-1}H^T\Sigma^{-1}$.
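A numerical verification that $\tilde K=(H^T\Sigma^{-1}H)^{-1}H^T\Sigma^{-1}$ satisfies the constraint and is not beaten by nearby feasible $K$ (numpy sketch; the random $H$ and $\Sigma$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 2))            # full column rank (almost surely)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + np.eye(5)            # symmetric positive definite

Si = np.linalg.inv(Sigma)
K = np.linalg.inv(H.T @ Si @ H) @ H.T @ Si
assert np.allclose(K @ H, np.eye(2))   # constraint K H = I

base = np.trace(K @ Sigma @ K.T)
for _ in range(20):
    D = rng.normal(size=(2, 5))
    D -= (D @ H) @ np.linalg.pinv(H)   # project so that D @ H = 0
    K2 = K + 0.1 * D                   # still feasible
    assert np.trace(K2 @ Sigma @ K2.T) >= base - 1e-9
```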
Let $H$ be a separable Hilbert space with an orthonormal basis $\{e_n: n \in\mathbb{N}\}$. Prove that $\exists ! T\in \mathcal{B}(H): T(e_n)=d_ne_n.$
You're thinking in the opposite direction. Given an orthonormal basis (really you can get away with a maximal linearly independent set) and a bounded linear operator, defining the operator on the basis is enough to identify it uniquely. Define $T$ on the basis such that $Te_n = d_n e_n$. Then show that $T$ extends to a bounded operator on the whole space (boundedness of the sequence $(d_n)$ is important here), then show that $T$ is unique.
Moments of a symmetric stable distribution
I will show that the $p^{th}$ moment exists for $p<\alpha$. $\newcommand{\P}{\mathbb{P}} \newcommand{\E}{\mathbb{E}}$ Recall/prove the following inequality, which is used in the proof of the Levy-Cramer/Levy continuity theorem $$\P(|X|\geq t) \leq At\int_{-1/t}^{1/t}(1-\varphi(s))ds$$ Use it to bound the tails $$ \P(|X|\geq t) \leq At\int_{-1/t}^{1/t}(1-e^{-c|s|^\alpha})ds \leq 2At\int_0^{1/t}cs^\alpha ds = \frac{2Ac}{\alpha+1}t^{-\alpha} =Bt^{-\alpha} $$ Use the derived bound to show that the $p^{th}$ moment exists $$ \E|X|^p = \int_0^\infty pt^{p-1}\P(|X|\geq t)dt < \infty \Leftrightarrow\\ \int_1^\infty pt^{p-1}\P(|X|\geq t)dt < pB\int_1^\infty t^{p-1}t^{-\alpha}dt < \infty \Leftrightarrow p-1-\alpha < -1 \Leftrightarrow \\ p < \alpha $$
Is this true $(a*b)^q =(c*d)^q \implies a*b=c*d?$
No, consider the cyclic group $\mathbb{Z} / q\mathbb{Z}$. Then the left-hand side is always satisfied whereas the right-hand side in general is not.
Given a $\triangle ABC$ construct another triangle with sides measuring the inverses of the altitudes of $\triangle ABC$
What is your definition of $1$ here? There is no inherent unit in the geometric plane. If you just want line segments that are inversely proportional to the given sides, one way is to choose an arbitrary point and three line segments from that point with lengths of your given sides. Then construct the circle through the other endpoints and extend the segments into chords. The extended segments will be inversely proportional to the given lengths. In the diagram, the red segments are the lengths of the sides of the triangle, and the blue segments are inversely proportional to the red ones. If you do have a unit, the following diagram shows how to take three reciprocals. Use an arbitrary angle with vertex $O$. The distances from $O$ to the red points $A'$, $B'$ and $C'$ are the lengths for which to find the reciprocals, and the distance from $O$ to $U'$ is the unit. These points are all on the same ray. A circle determines $U''$ on the other ray. Line $\overline{U'A''}$ is parallel to segment $\overline{U''A'}$, and so on. The distance $OA''$ is then the reciprocal of distance $OA'$, and so on. The first method here is more elegant and does not rely on a unit but does not give reciprocals.
Relationship between spectral radii of commuting operators
I recently wrote up a solution to this very problem, which I've copied below. Note that in this context, $\mathfrak A$ is the space of bounded operators on $B$. I hope you find this helpful. $ \newcommand{\f}{\mathfrak} \DeclareMathOperator{\rad}{r} \newcommand{\eps}{\varepsilon} \DeclareMathOperator{\spec}{spec} \newcommand{\lp}{\left(} \newcommand{\rp}{\right)} $ Let $A,B \in \f A$. We wish to show that $\rad(A + B) \leq \rad(A) + \rad(B)$. Let $\eps > 0$ be arbitrary. Noting that $\|A^n\|^{1/n} \to \rad(A)$, we may select an $m$ such that for all $p \geq m$, we have $$ \|A^p\| \leq (\rad(A) + \eps)^p, \qquad \|B^p\| \leq (\rad(B) + \eps)^p $$ Since $A$ and $B$ commute, the binomial theorem applies. So, we have for $n > 2m$ \begin{align*} \|(A + B)^n\| & \leq \left \| \sum_{p=0}^m \binom np A^pB^{n-p} \right\| \\ & \quad + \left \| \sum_{p={m+1}}^{n-m-1} \binom np A^pB^{n-p} \right\| + \left \| \sum_{p=0}^m \binom np A^{n-p}B^{p} \right\| \\ & \leq \sum_{p=0}^m \binom np \lp \|A\|^p(\rad(B) + \eps)^{n-p} + (\rad(A) + \eps)^{n-p}\|B\|^p \rp \\ & \quad + \sum_{p=m+1}^{n - m - 1} \binom np (\rad(A)+\eps)^{p}(\rad(B) + \eps)^{n-p} \\ & \leq (\rad(B) + \eps)^n \overbrace{n^m}^{\binom np \leq n^p \leq n^m} \lp \sum_{p=0}^m \|A\|^p(\rad(B) + \eps)^{-p} \rp \\ & \quad + (\rad(A) + \eps)^{n} n^m \lp \sum_{p=0}^m (\rad(A) + \eps)^{-p} \|B\|^p \rp \\ & \quad + \sum_{p=0}^{n} \binom np (\rad(A)+\eps)^{p}(\rad(B) + \eps)^{n-p} % % \\ & = n^m (\rad(B) + \eps)^n c(A,B,m) + n^m (\rad(A) + \eps)^n c(B,A,m) \\ & \quad + (\rad(A) + \rad(B) + 2\eps)^n \end{align*} Thus, we have \begin{align*} \lim_{n \to \infty}\frac{\|(A+B)^n\|}{(\rad(A) + \rad(B) + 2 \eps)^n} &\leq \lim_{n \to \infty} n^m \lp \frac{\rad(B) + \eps}{\rad(A) + \rad(B) + 2 \eps} \rp^n c(A,B,m) \\ & \quad + \lim_{n \to \infty} n^m \lp \frac{\rad(A) + \eps}{\rad(A) + \rad(B) + 2 \eps} \rp^n c(B,A,m) % \\ & \quad + 1 = 1 \end{align*} Thus, we may conclude that $$ \rad(A + B) = \lim_{n \to \infty} \|(A+B)^n\|^{1/n} \leq \rad(A) + \rad(B) + 2\eps $$ Since $\eps$ was arbitrary, conclude that $\rad(A + B) \leq \rad(A) + \rad(B)$ as desired.
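A numerical spot-check of $r(A+B)\le r(A)+r(B)$ for commuting matrices, taking both as polynomials in the same random matrix so that they commute (numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))

A = M @ M                       # polynomials in M commute with each other
B = 2 * M + 3 * np.eye(4)
assert np.allclose(A @ B, B @ A)

r = lambda X: max(abs(np.linalg.eigvals(X)))   # spectral radius
assert r(A + B) <= r(A) + r(B) + 1e-9
```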
Prove or disprove: There exists an integer $k\geq 4$ such that $2k^2 -5k+2$ is a prime number
Hint: $$2k^2-5k+2=(2k-1)(k-2){}$$
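So $2k^2-5k+2$ factors, and for $k\ge 4$ both factors exceed $1$, hence it is never prime there; a quick check:

```python
for k in range(4, 200):
    n = 2 * k * k - 5 * k + 2
    assert n == (2 * k - 1) * (k - 2)
    assert 2 * k - 1 > 1 and k - 2 > 1   # proper factorization: n is composite
```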
Proving a relation is a function
In general you prove that a relation is a function by showing that every $x$ has one and only one pair $(x,y) \in \tau$. That is not the case here, since e.g. $(1,2) \in \tau$ and $(1,3) \in \tau$. The relation could, however, be a function on an adequate restriction, e.g. $\{1\} \rightarrow \{1\}$.
How to prove the Dedekind identity?
Hints: One direction is easy, both $U$ and $V\cap H$ are subgroups of $H$. For the other direction, every element of $UV\cap H$, so in particular an element $h\in H$, can be written as $h=uv$ with $u\in U$, $v\in V$. Use again $U\le H$, and by group properties conclude that $v\in H$.
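A tiny sanity check of the modular law $U(V\cap H)=UV\cap H$ (valid when $U\le H$) inside $S_3$, with permutations as tuples (sketch):

```python
def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
U = H = {e, (1, 0, 2)}             # U = H = <(0 1)>, so U <= H
V = {e, (1, 2, 0), (2, 0, 1)}      # V = A_3

prod = lambda X, Y: {compose(x, y) for x in X for y in Y}
assert prod(U, V & H) == prod(U, V) & H   # Dedekind: U(V ∩ H) = UV ∩ H
```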
Calderon-Zygmund decomposition
Here's one interpretation of how to carry out the decomposition for $f(x) = \delta_0(x)$, with $\alpha = 1$. I'll leave the other part as an exercise. The starting average value of $f$, over the full interval $[-\pi, \pi]$, is $(2\pi)^{-1} < \alpha$, so we proceed to subdivide the interval. On the child intervals, $[-\pi, 0]$ and $[0, \pi]$, $f$ has the average value $\pi^{-1} < \alpha$. So we subdivide again, obtaining $[-\pi, -\pi/2]$, $[-\pi/2, 0]$, $[0, \pi/2]$, and $[\pi/2, \pi]$. This time the average values are $0$, $2/\pi$, $2/\pi$, and $0$, still all less than $\alpha$. Subdividing a third time (I'm going to stop listing the intervals) yields averages of $0$, $0$, $0$, $4/\pi$, $4/\pi$, $0$, $0$, and $0$. The middle two intervals, $[-\pi/4, 0]$ and $[0, \pi/4]$, now have averages greater than $\alpha$, so we set those intervals aside. But the other intervals continue to be subdivided forever, with average values never exceeding $\alpha$ (in fact the averages are all $0$). The definition of $g$ is now potentially a bit hazy. Going back to Stein's exposition in Singular Integrals and Differentiability Properties of Functions (p. 31), we have that $g$ is defined to be equal to $f$ off of the intervals $[-\pi/4, 0]$ and $[0, \pi/4]$. So $g = 0$ on $[-\pi, -\pi/4)$ and $(\pi/4, \pi]$. On the interiors of the set-aside intervals, $g$ is defined to be the average value of $f$ on those intervals. So $g = 4/\pi$ on $(-\pi/4, 0)$ and $(0, \pi/4)$. Stein doesn't specify what should happen at $-\pi/4$, $0$, or $\pi/4$, because for ordinary functions that doesn't matter: that remainder has Lebesgue measure $0$, and for ordinary functions there's no mass on such a null set. But now, because we're dealing with a delta function, there is mass at one of these points. So at the moment it's a bit hazy. Moving on temporarily, we can safely say that $b$ is $0$ on $[-\pi, -\pi/4)$ and $(\pi/4, \pi]$. 
Further, though (again per Stein's exposition), the average value of $b$ on each interval $[-\pi/4, 0]$ and $[0, \pi/4]$ is supposed to be $0$. Now, we need to have $b = -4/\pi$ on $(-\pi/4, 0)$ and $(0, \pi/4)$ so that $f = g + b$ there. But then to make the average values work out to be $0$, we also need to include $\delta_0$ as a part of $b$ (and so not as a part of $g$). With that, we've essentially fully defined $g$ and $b$. There are still a few individual points where we haven't provided specific function values, but these should be immaterial for practical purposes as all of $f$'s mass is accounted for.
Generating function for number of partitions with only distinct even parts
$a_n$ is trivially zero for odd $n$, and for even $n$ halving every part gives a bijection between partitions of $n$ into distinct even parts and partitions of $n/2$ into distinct parts, so the generating function is simply that for partitions into distinct parts with $x^2$ substituted for $x$. Concretely this is a spaced-out OEIS A000009: $$1+x^2+x^4+2x^6+2x^8+\dots$$
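The substituted product $\prod_{k\ge1}(1+x^{2k})$ is easy to expand and compare with the series above (sketch):

```python
N = 20
coef = [0] * (N + 1)
coef[0] = 1
for k in range(2, N + 1, 2):        # multiply by (1 + x^k) for each even k
    for n in range(N, k - 1, -1):
        coef[n] += coef[n - k]

# 1 + x^2 + x^4 + 2x^6 + 2x^8 + ...
assert coef[:10] == [1, 0, 1, 0, 1, 0, 2, 0, 2, 0]
```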
How do you determine whether a quadratic form is positive or negative definite?
Diagonalize. In this case, it comes down to completing the square.
$n^\text{th}$ integral of $\operatorname{li}(x)$
$\newcommand{\li}{\operatorname{li}}\newcommand{\Ei}{\operatorname{Ei}}$ Well I've finally figured it out. The $n^\text{th}$ integral of $\li(x)$ is equal to: $$\frac{1}{n!}\sum^n_{k=0}(-1)^k{{n\choose k}}x^{n-k}\Ei((k+1)\ln(x))$$ $$=\frac{1}{n!}\sum^n_{k=0}(-1)^k{{n\choose k}}x^{n-k}\li(x^{k+1})$$ or $$\sum^n_{k=0}(-1)^k\frac{x^{n-k}}{(n-k)!k!}\li(x^{k+1})$$
Find all the conjugates of a primitive $35$th root of unity over $\mathbb{F}_{13}$
All that’s missing from your argument is the realization that for an algebraic extension of $\Bbb F_q$, the Galois group is generated by Frobenius, $z\mapsto z^q$, valid for all $z$ in that algebraic extension. In particular, since here $q=13$, the conjugates of your primitive $35$-th root of unity $\zeta$ are $\{\zeta,\zeta^{13},\zeta^{169}=\zeta^{29},\zeta^{2197}=\zeta^{27}\}$ .
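The Frobenius orbit of exponents is just the cyclic group generated by $13$ modulo $35$ (sketch):

```python
orbit, e = [], 1
while True:
    orbit.append(e)
    e = (e * 13) % 35
    if e == 1:
        break

# exponents of the conjugates zeta^e; the orbit closes after ord_35(13) = 4 steps
assert orbit == [1, 13, 29, 27]
```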
Why is two-element null semigroup excluded from the $0$-simple semigroup definition?
We can consider $S$ as $0$-simple, but then we must add in the Sushkevich-Rees theorem (and other theorems) the words: "except such $S$ that $S^2=0$". It is for the same reason that the axiom "$0\ne 1$" is used in the theory of fields.
Find the probability limit of sequences of random variables
You don't even need Markov's inequality. For all $n > \sqrt{\epsilon}$ you have $$P(|X_n - 0| \ge \epsilon) = P(X_n \ge \epsilon) = \frac{1}{n} \to 0.$$
Question based on cost price, market price and discount
Hint: The profit in the first case is $P_1=\frac{3900-C_1}{C_1}=0.3$ And the profit in the second case $P_2=\frac{3900-C_2}{3900}=0.3$
Power Inequality for nonnegative number and power
I think the two relationships are correct, because: $x^a>y^a \Rightarrow (x^{a})^{\frac{1}{a}}>(y^{a})^{\frac{1}{a}} \Rightarrow x>y$. And: $x^a>y^b \Rightarrow (x^{a})^{\frac{1}{a}}>(y^{b})^{\frac{1}{a}} \Rightarrow x>y^{\frac{b}{a}}$. But this is correct only if $a, b \in \mathbb{R}^{*+}$ and $x, y \in \mathbb{R}^{*+}\setminus\{1\}$.
Boolean expression for a problem
The Hamming distance ($d_H$) of two bitstrings is the total number of positions in which their bits differ. So if for example we take the $d_H$ of $A=0100_2$ and $B=1010_2$ we get $3$ as result, because there are three bits that differ between the strings. To compute the Hamming distance using Exclusive-OR (XOR), we do it this way: $$d_H(a, b) = \operatorname{popcount}(a \oplus b)$$ where $\oplus$ is the XOR operator — bitwise addition modulo $2$, i.e. each bit is $(a_i+b_i) \bmod 2$ — and popcount counts the resulting $1$ bits. In regards to $a$ and $b\in\mathbb{Z^+}$, where $a$ lies in the interval between $0$ and $2^n-1$, and $b=0000_2$, the condition $d_H(a, b) > p$ presents a problem: since $n$ can vary, it could in theory be unbounded, and as far as I know there is no way to express that as a single Boolean expression — such an expression would have an infinite number of variables. You could probably express it as a mathematical function, however; I haven't done this. It may be worth checking out Zhegalkin polynomials or the Galois representation, but I am not sure if those will help or get you somewhere.
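In code, the XOR-then-popcount computation (sketch):

```python
def hamming(a: int, b: int) -> int:
    # number of differing bit positions = popcount of a XOR b
    return bin(a ^ b).count("1")

assert hamming(0b0100, 0b1010) == 3   # the A, B example above
assert hamming(0b0000, 0b1111) == 4
```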
Find the minimum value of $P=\sum _{cyc}\frac{\left(x+1\right)^2\left(y+1\right)^2}{z^2+1}$
Let $x=y=z=1$. Hence, $P=24$. We'll prove that it's a minimal value. Indeed, by C-S $$\sum_{cyc}\frac{(x+1)^2(y+1)^2}{z^2+1}=\sum_{cyc}\frac{(x+1)^2(y+1)^2(x+y)^2}{(z^2+1)(x+y)^2}\geq\frac{\left(\sum\limits_{cyc}(x+1)(y+1)(x+y)\right)^2}{\sum\limits_{cyc}(z^2+1)(x+y)^2}.$$ Thus, it remains to prove that $$\left(\sum\limits_{cyc}(x+1)(y+1)(x+y)\right)^2\geq24\sum\limits_{cyc}(z^2+1)(x+y)^2.$$ Let $x+y+z=3u$, $xy+xz+yz=3v^2$ and $xyz=w^3$. Hence, $u=1$ and$$\sum\limits_{cyc}(x+1)(y+1)(x+y)=\sum_{cyc}(x^2y+x^2z+2x^2+2xy+2x)=$$ $$=9uv^2-3w^3+2u(9u^2-6v^2)+6uv^2+6u^3=3(8u^3+uv^2-w^3);$$ $$\sum\limits_{cyc}(z^2+1)(x+y)^2=2\sum_{cyc}(x^2y^2+x^2yz+x^2u^2+xyu^2)=$$ $$=2(9v^4-6uw^3+3uw^3+9u^4-6u^2v^2+3u^2v^2)=6(3u^4-u^2v^2+3v^4-uw^3).$$ Id est, it's enough to prove that $f(w^3)\geq0$, where $$f(w^3)=(8u^3+uv^2-w^3)^2-16(3u^6-u^4v^2+3u^2v^4-u^3w^3).$$ Now, $$f'(w^3)=-2(8u^3+uv^2-w^3)+16u^3=2w^3-2uv^2\leq0,$$ which says that $f$ is decreasing function. Thus, it's enough to prove our inequality for a maximal value of $w^3$, which happens for equality case of two variables. Let $y=x$ and $z=3-2x$. Hence, $$\left(\sum\limits_{cyc}(x+1)(y+1)(x+y)\right)^2\geq24\sum\limits_{cyc}(z^2+1)(x+y)^2$$ gives $$(x-1)^2(x^4-2x^3-11x^2+24x+4)\geq0.$$ which is obvious. Done!
Can $\cos(2x)+8\cos(4x)$ be simplified into $0.5A(\cos(a+b) + \cos(a-b))$?
No. Your purported right-hand side has magnitude at most $2$, but your left-hand side attains values of magnitude up to $9$ (for instance, it equals $9$ at $x=0$).
"Show" that the direction cosines of a vector satisfies...
If $$x^2+y^2+z^2=r^2,$$ then $$\frac{x^2}{r^2}+\frac{y^2}{r^2}+\frac{z^2}{r^2}=1,$$ or $$\cos^2 \alpha + \cos^2 \beta + \cos^2 \gamma=1.$$
An improved inequality for the deficiency function when $\gcd(x,y)=1$, $x > 1$, and $y > 1$
(1) It looks correct to me. (2) You cannot get a better inequality than $$D(x)D(y) - D(xy) \ge 2$$ since equality holds when $x,y$ are distinct primes.
If $f_n(t):=f(t^n)$ converges uniformly to continuous function then $f$ constant
Yes it is correct. Maybe the argument used to show that $f(\overline{x}) = f_n(\overline{x}^{1/n}) \to g(1)$ should be detailed a little bit more. We use the fact that if $f_n\to g$ uniformly and $x_n\to x$, then $f_n(x_n)\to g(x)$. By the way, it seems it is the only place where we use uniform convergence.
Tautological Implication of Conditional Statements
Yes. Just informally: For the first one: if $P \land Q$, then $Q$, and thus (given $Q \rightarrow R$) you get $R$. So you have $Q$ and $R$, and so $Q \land R$ For the second: if $P \lor Q$, then either $P$ or $Q$ (or both). If $Q$, then certainly $Q \lor R$, and if $P$, then (given $P \rightarrow Q$) we get $Q$, so again we get $Q \lor R$
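If you want to verify these mechanically, here is a truth-table check in Python; I am inferring the two implications from the reasoning above, namely $((P\land Q)\land(Q\to R))\to(Q\land R)$ and $((P\lor Q)\land(P\to Q))\to(Q\lor R)$.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# ((P and Q) and (Q -> R)) -> (Q and R)
taut1 = all(implies((p and q) and implies(q, r), q and r)
            for p, q, r in product([False, True], repeat=3))
# ((P or Q) and (P -> Q)) -> (Q or R)
taut2 = all(implies((p or q) and implies(p, q), q or r)
            for p, q, r in product([False, True], repeat=3))
print(taut1, taut2)  # True True
```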
A certain kind of set of integers
Yes, it's true. It follows from (the finitary version of) Szemerédi's theorem: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, Acta Arithmetica 27 (1975), 199–245. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, Acta Math. Acad. Sci. Hungar. 20 (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)
Combinatorics (Venn diagram problem)
Denote by H the animals with a horn only, T with a tail only, F with fur only; HT those with horn and tail only, and so forth; Z those with no horn, tail, or fur. All the sets H, F, T, FH, FT, HT, FHT, Z are disjoint (drawn in different colors). Then: H + HT + FH + FHT = 12.000; T + HT + FT + FHT = 15.000; Z + H + HT + T = 20.000; HT + FHT = 8.000; FT + FHT = 6.000; FH + FHT = 5.000; FHT = 1.000. Hence (dropping the trailing zeroes): HT = 8 - 1 = 7; FT = 6 - 1 = 5; FH = 5 - 1 = 4; H = 12 - 1 - 7 - 4 = 0 (if someone has a horn, then they must have a tail or fur); T = 15 - 1 - 7 - 5 = 2; Z = 20 - 7 - 0 - 2 = 11. Hmm, the population of Xprom is undetermined... It is unknown how many inhabitants have fur only. All I can tell is that the population is $\ge 30.000$.
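The arithmetic can be checked mechanically; this is just the equations above transcribed into Python (all counts in thousands):

```python
FHT = 1
HT = 8 - FHT              # horn & tail only
FT = 6 - FHT              # fur & tail only
FH = 5 - FHT              # fur & horn only
H = 12 - FHT - HT - FH    # horn only
T = 15 - FHT - HT - FT    # tail only
Z = 20 - H - HT - T       # none of horn/tail/fur
print(HT, FT, FH, H, T, Z)  # 7 5 4 0 2 11
# total population = 30 + F thousand, with F (fur only) unknown
```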
Integrating differentials
There is no intermediate step: by the hypothesis, $\text{d}v = a \text{d}t$, and both integrals $$ \int_{v_0}^{v_f} \text{d}v = \int_{t_0}^{t_f} a \, \text{d}t $$ indicate the same path from the point where $v = v_0$ and $t = t_0$ to the point where $v = v_f$ and $t = t_f$ and have the same integrand, and so therefore are the same integral. (because we're in one dimension, only the two endpoints of the path matter) When working in the style of notation where you work with variables that depend functionally on one another rather than working with functions, the differential form becomes the correct way to formulate things. A vague, heuristic definition is that a differential form is simply something you integrate along a path ('real' definitions generally don't come until you start studying differential geometry, which is unfortunately much more complicated than you actually need to work with the issue at hand). So if $\omega$ is a differential form, then $\int_a^b \omega$ is something meaningful (assuming there is enough context to infer what path $a$ and $b$ indicate). However, $\int_a^b v$ doesn't make sense! Contrast the above with the way integrals are formulated in the alternative style where one works with functions, where writing $\int_a^b f$ does make sense. The connection between the two approaches to integral calculus is through identity $$ \int_a^b f = \int_a^b f(x) \, \text{d}x $$ There is a common abuse of notation $v=v(t)$ that conflates the two styles which can sometimes create some confusion here; when this abuse is present, you have to take some extra care to infer what is really meant.
Whether a prime number $ p $ can be written in the form $ 3A + 2B $, where $ A,B \in \mathbb{N} $.
This is an application of the Frobenius Coin Problem, whose solution is given by the following theorem. Theorem Suppose that $ m,n \in \mathbb{N} $ satisfy $ \gcd(m,n) = 1 $. Then $ mn - m - n $ is the largest integer that cannot be written as $ ma + nb $, where $ a,b \in \mathbb{N}_{0} $. Observe that $ 3A + 2B = 3(A - 1) + 2(B - 1) + 5 $. As $ \gcd(3,2) = 1 $, the theorem yields the following statements: Any integer $ > 3 \cdot 2 - 3 - 2 = 1 $ can be written as $ 3a + 2b $, where $ a,b \in \mathbb{N}_{0} $. Any integer $ > 1 $ can thus be written as $ 3(A - 1) + 2(B - 1) $, where $ A,B \in \mathbb{N} $. Therefore, any integer $ > 1 + 5 = 6 $ can be written as $ 3A + 2B $, where $ A,B \in \mathbb{N} $. Notice that the numbers $ 1 $, $ 2 $, $ 3 $, $ 4 $ and $ 6 $ cannot be put in the required form, whereas $ 5 = 3 \cdot 1 + 2 \cdot 1 $ can. Therefore, the answer to your problem is ‘all prime numbers $ \geq 5 $’.
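A brute-force Python check of the conclusion (the helper name is my own, purely illustrative):

```python
def representable(n: int) -> bool:
    """Can n be written as 3*A + 2*B with A, B >= 1?"""
    return any((n - 3 * a) >= 2 and (n - 3 * a) % 2 == 0
               for a in range(1, n // 3 + 1))

not_representable = [n for n in range(1, 30) if not representable(n)]
print(not_representable)  # [1, 2, 3, 4, 6]
```

So every integer $\ge 7$, and also $5$, is representable, matching the answer's conclusion.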
$3$-manifold has same homology groups as a $3$-sphere.
Since $M$ is orientable, without boundary and connected, we have $H_0(M;\mathbb{Z}) = H_3(M;\mathbb{Z}) = \mathbb{Z}$. Using Poincaré duality, you know that the torsion subgroup of $H_2(M;\mathbb{Z})$ is isomorphic to the torsion subgroup of $H_{3-2-1}(M;\mathbb{Z}) = H_0(M;\mathbb{Z}) = \mathbb{Z}$ and thus is zero. Using Poincaré duality again, you know that the free part of $H_2(M;\mathbb{Z})$ is isomorphic to the free part of $H_{3-2}(M;\mathbb{Z}) = H_1(M;\mathbb{Z}) = 0$ and thus $H_2(M;\mathbb{Z}) = 0$. This shows that $M$ has the homology groups of a three dimensional sphere.
Solve in positive integers $x^2+3y^2=z^2$ and $x^2+y^2=5z^2$
People are not careful with these. For $x^2 + 3 y^2 = z^2$ with odd $z,$ we indeed get $$ (r^2 - 3 s^2)^2 + 3(2rs)^2 = (r^2 + 3 s^2)^2. $$ This does not give the primitive solutions with even $z,$ which come from $$ \left( \frac{r^2 - 3 s^2}{2} \right)^2 + 3 (rs)^2 = \left( \frac{r^2 + 3 s^2}{2} \right)^2 $$ with both $r,s$ odd. For $x^2 + y^2 = 5 z^2,$ primitive means $z$ odd; we take $x$ to be the even one. $$ (2u^2 - 2 u v - 2 v^2)^2 + (u^2 + 4 uv - v^2)^2 = 5 (u^2 + v^2)^2 $$
prove $\bigcup_{n=1}^\infty A_n=\bigcup_{n=1}^\infty B_n$
A start: The right-hand side is clearly a subset of the left-hand side, because $B_n\subseteq A_n$. To show that the left-hand side is a subset of the right-hand side, pick an element $x$ of the left-hand side. Then there is a smallest $n$ such that $x\in A_n$. Use this to show $x$ is in the RHS.
Given the y-values of points sampled at a constant angle on a circle with unknown center and radius, find corresponding x-coordinates
There is some ambiguity in the problem statement. You say the circles are tangent at the first $y$ value, which therefore is also the $y$ coordinate of the center of each circle. But in your worked example, the first $y$ value is not the $y$ value of the center of the circle. In fact the circles in that solution would not be tangent to the $y$ axis or each other, but would intersect twice. I did not assume that the circles are tangent at the first $y$ value. I assumed only a sequence of $y$ values of equally-spaced points along the circle that may or may not include the tangent point. Consider four consecutive points $(x_1,y_1),$ $(x_2,y_2),$ $(x_3,y_3),$ and $(x_4,y_4),$ where initially only the $y$ values are known, with $y_1 < y_2 < y_3 < y_4.$ Choose points such that $y_4 - y_3 \neq y_2 - y_1,$ since otherwise the solution is not determined. Since the central angles are equal, the distances between consecutive pairs of points are the same, and likewise the squares of the distances are the same, that is, $$ (x_2 - x_1)^2 + (y_2 - y_1)^2 = (x_3 - x_2)^2 + (y_3 - y_2)^2 = (x_4 - x_3)^2 + (y_4 - y_3)^2. $$ Let \begin{align} a &= \tfrac12(y_2 - y_1 - y_4 + y_3),\\ b &= \tfrac12(y_3 - y_2),\\ c &= \tfrac12(y_4 - y_1),\\ t &= \tfrac12(x_4 - x_3 - x_2 + x_1),\\ u &= \tfrac12(x_3 - x_2),\\ v &= \tfrac12(x_4 - x_1). \end{align} Then $a,$ $b,$ and $c$ are known, whereas $t,$ $u,$ and $v$ are initially unknown. We have the following facts: \begin{align} x_2 - x_1 &= v - u - t, & y_2 - y_1 &= c - b + a,\\ x_3 - x_2 &= 2u, & y_3 - y_2 &= 2b,\\ x_4 - x_3 &= v - u + t, & y_4 - y_3 &= c - b - a. 
\end{align} Therefore $$ (v - u - t)^2 + (c - b + a)^2 = (2u)^2 + (2b)^2 = (v - u + t)^2 + (c - b - a)^2.\tag1 $$ Let $(x_m,y_m) = \left(\tfrac12(x_2+x_3), \tfrac12(y_2+y_3)\right)$ and $(x_n,y_n) = \left(\tfrac12(x_1+x_4), \tfrac12(y_1+y_4)\right).$ That is, $(x_m,y_m)$ is the midpoint of the chord from $(x_2,y_2)$ to $(x_3,y_3)$ and $(x_n,y_n)$ is the midpoint of the chord from $(x_1,y_1)$ to $(x_4,y_4).$ By symmetry of the trapezoid with vertices $(x_1,y_1),$ $(x_2,y_2),$ $(x_3,y_3),$ and $(x_4,y_4),$ the segment from $(x_m,y_m)$ to $(x_n,y_n)$ is perpendicular to the edge from $(x_2,y_2)$ to $(x_3,y_3).$ So $$\frac{x_3 - x_2}{y_3 - y_2} = -\frac{y_m - y_n}{x_m - x_n}. \tag2$$ (The condition $y_4 - y_3 \neq y_2 - y_1$ implies that neither the top nor bottom of either ratio is zero.) But $y_m - y_n = a$ and $x_m - x_n = -t,$ so Equation $(2)$ can be rewritten $\frac bu = \frac ta,$ which implies that $$ u = \frac{ab}{t}. \tag3$$ The chord from $(x_1,y_1)$ to $(x_4,y_4)$ is parallel to the edge from $(x_2,y_2)$ to $(x_3,y_3),$ which implies that $\frac cv = \frac bu = \frac ta,$ so $v = \frac{ac}{t}$ and therefore $$ v - u = \frac{a(c - b)}{t}. \tag4 $$ Use Equations $(3)$ and $(4)$ to substitute for $u$ and for $v - u$ in Equation $(1)$. We can just look at the first equality, since symmetry ensures that the second equality will be true if the first one is true. So we have $$ \left(\frac{a(c - b)}{t} - t\right)^2 + (a + c - b)^2 = 4\left(b^2 + \frac{a^2b^2}{t^2}\right) . $$ Multiplying through by $t^2,$ this is equivalent to $$ t^4 + (a^2 + (c - b)^2 - 4b^2)t^2 + a^2\left((c - b)^2 - 4b^2\right) = 0, $$ which factors as $$ \left(t^2 + a^2\right)\left(t^2 + (c - b)^2 - 4b^2\right) = 0. $$ Since $t$ is real and $a \neq 0$ (recall $y_4 - y_3 \neq y_2 - y_1$), the first factor cannot vanish, so $$ t^2 = 4b^2 - (c - b)^2, $$ which determines $t$ up to sign (the two signs correspond to mirror-image configurations). Once you have $t$ you can find $u$ and $v$ easily. Depending on the problem statement, it might take some additional work to set the $x$ coordinates so that the circle is tangent to the $y$ axis. Note that the way I interpreted the problem, three $y$ values would not be enough. 
If there is a circle that passes through three equally spaced points with given $y$ coordinates and satisfies the other conditions, you can find another circle with a slightly larger or smaller radius that also will have equally spaced points with the given $y$ coordinates and that also will satisfy the other conditions. So you really do need four points under that interpretation. Knowing that the first $y$ value is the tangent point, I think three $y$ values would be enough. One approach would be to label your first three $y$ values $y_2,$ $y_3,$ and $y_4,$ then set $y_1 = 2y_2 - y_3$ and proceed with the solution given above.
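As a numerical sanity check of relations $(3)$ and $(4)$, here is a small Python sketch; the circle parameters are arbitrary test data, not from the question.

```python
import math

# Four equally spaced points on a circle of radius R, start angle th, step d.
R, th, d = 2.5, 0.2, 0.4
xs = [R * math.cos(th + k * d) for k in range(4)]
ys = [R * math.sin(th + k * d) for k in range(4)]

a = (ys[1] - ys[0] - ys[3] + ys[2]) / 2
b = (ys[2] - ys[1]) / 2
c = (ys[3] - ys[0]) / 2
t = (xs[3] - xs[2] - xs[1] + xs[0]) / 2
u = (xs[2] - xs[1]) / 2
v = (xs[3] - xs[0]) / 2

assert abs(u - a * b / t) < 1e-9   # equation (3): u = ab/t
assert abs(v - a * c / t) < 1e-9   # hence (4): v - u = a(c - b)/t
```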
von Neumann stability analysis for irregular meshes
Sorry to bring this question back from the grave. I'm assuming you've found your answer by now, but others may be interested. In general, the time step for finite difference methods (also finite element methods) is limited by the smallest mesh size in your problem. So you carry out your von-Neumann stability analysis as usual, but use the smallest $h$ in your timestep calculation.
Max and min with Lagrange multipliers, question about getting a single extrema point back
You properly applied Lagrange multipliers and found the two extremum points, but neither of these points can be a maximum. Consider the same problem with the constraint $xy=a\implies y=\frac a x$; the function becomes $$f(x)=x^2+\frac{a^2}{x^2}\implies f'(x)=2 x-\frac{2 a^2}{x^3} \implies f''(x)=2+\frac{6 a^2}{x^4}>0.$$ So, if $a>0$, in the real domain the derivative vanishes at $x=\pm \sqrt a$, where $f(\pm \sqrt a)=2a$. The second derivative being always positive, both are minimum points.
Evaluate $ \sum_{n=1}^{\infty} \frac{\sin \ n}{ n } $ using the fourier series
$\newcommand{\+}{^{\dagger}}% \newcommand{\angles}[1]{\left\langle #1 \right\rangle}% \newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}% \newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}% \newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}% \newcommand{\dd}{{\rm d}}% \newcommand{\ds}[1]{\displaystyle{#1}}% \newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}% \newcommand{\expo}[1]{\,{\rm e}^{#1}\,}% \newcommand{\fermi}{\,{\rm f}}% \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}% \newcommand{\half}{{1 \over 2}}% \newcommand{\ic}{{\rm i}}% \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow}% \newcommand{\isdiv}{\,\left.\right\vert\,}% \newcommand{\ket}[1]{\left\vert #1\right\rangle}% \newcommand{\ol}[1]{\overline{#1}}% \newcommand{\pars}[1]{\left( #1 \right)}% \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}}% \newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}% \newcommand{\sech}{\,{\rm sech}}% \newcommand{\sgn}{\,{\rm sgn}}% \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}}% \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ $\ds{\sum_{n = 1}^{\infty}{\sin\pars{n} \over n} = \half\pars{\,\sum_{n = -\infty}^{\infty}{\sin\pars{n} \over n} - 1}.\quad}$ See $\large\tt details$ over here . 
\begin{align} \sum_{n = -\infty}^{\infty}{\sin\pars{n} \over n}&= \int_{-\infty}^{\infty}{\sin{x} \over x}\sum_{n = -\infty}^{\infty}\expo{2n\pi x\ic} \,\dd x = \int_{-\infty}^{\infty}\half\int_{-1}^{1}\expo{\ic kx}\,\dd k \sum_{n = -\infty}^{\infty}\expo{-2n\pi x\ic}\,\dd x \\[3mm]&= \pi\sum_{n = -\infty}^{\infty}\int_{-1}^{1}\dd k \int_{-\infty}^{\infty}\expo{\ic\pars{k - 2n\pi}x}\,{\dd x \over 2\pi} = \pi\sum_{n = -\infty}^{\infty}\int_{-1}^{1}\delta\pars{k - 2n\pi}\,\dd k \\[3mm]&= \pi\sum_{n = -\infty}^{\infty}\Theta\pars{{1 \over 2\pi} - \verts{n}} = \pi\,\Theta\pars{1 \over 2\pi} = \pi \end{align} Then, $$\color{#0000ff}{\large% \sum_{n = 1}^{\infty}{\sin\pars{n} \over n} = \half\pars{\pi - 1}} $$
Is there always isomorphism between two sets that have the same cardinality?
By definition there is always a bijection between two sets of the same cardinality, Otherwise they wouldn't have the same cardinality. However, calling it an "isomorphism" suggests that you're looking for a bijection that preserves some kind of additional structure, and until you tell us which kind of additional structure you want preserved, the question can't really be answered. For most practically occurring cases, I think the answer will be "no, there exist non-isomorphic such-and-suches with the same cardinality". But there are pathological corner cases where this is not the case, for example if we consider isomorphisms between standard models of the pure predicate calculus with equality over the empty language. Or, somewhat less trivially: For any finite field $K$, if two vector spaces over $K$ have the same cardinality, then they're isomorphic as $K$-vector spaces.
Linear Algebra,Conjugate Transpose
You need to prove that $ \mathrm{Tr}(A^*A) \ge 0; $ $\mathrm{Tr}(A^*A) = 0 \iff A = 0; $ $ \mathrm{Tr}(A^*(B + C)) = \mathrm{Tr}(A^*B) + \mathrm{Tr}(A^*C); $ $ \mathrm{Tr}(A^*(zB)) = z\,\mathrm{Tr}(A^*B); $ $ \mathrm{Tr}(A^*B) = \left(\mathrm{Tr}(B^*A)\right)^* $ For (1) and (2), just notice that $ A^*A $ is positive semidefinite, thus has non-negative eigenvalues and non-negative trace; also, if the trace is zero all the eigenvalues need to be zero, and thus $ A^*A = 0 $, which is only possible if $ A = 0 $. Items (3) and (4) are trivially verified because the trace is linear. For (5), just write out the trace explicitly, apply the linearity of complex conjugation, and the result follows.
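For concreteness, a small Python spot check of properties (1), (4) and (5), using $\langle A,B\rangle = \mathrm{Tr}(A^*B) = \sum_{i,j}\overline{A_{ij}}B_{ij}$ (the matrices are arbitrary test data):

```python
def ip(A, B):
    """<A, B> = Tr(A* B) = sum over i, j of conj(A[i][j]) * B[i][j]."""
    return sum(A[i][j].conjugate() * B[i][j]
               for i in range(len(A)) for j in range(len(A[0])))

A = [[1 + 2j, 0.5 + 0j], [-1j, 3 - 1j]]
B = [[2 - 1j, 1j], [4 + 0j, -2 + 0.5j]]
z = 1.5 - 2j

assert ip(A, A).real > 0 and abs(ip(A, A).imag) < 1e-12                         # (1)
assert abs(ip(A, [[z * x for x in row] for row in B]) - z * ip(A, B)) < 1e-12   # (4)
assert abs(ip(A, B) - ip(B, A).conjugate()) < 1e-12                             # (5)
```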
How to take the derivative of the square root of a quadratic form
We have $$\sigma_p^2=\sum^n_{i=1}w_i\sum^n_{j=1}\Sigma_{ij}w_j.$$ Differentiating with respect to $w_k$ gives $$\frac{\partial \sigma_p^2}{\partial w_k}=\sum^n_{i=1}\frac{\partial w_i}{\partial w_k}\sum^n_{j=1}\Sigma_{ij} w_j+\sum^n_{i=1} w_i\sum^n_{j=1}\Sigma_{ij} \frac{\partial w_j}{\partial w_k}$$ $$=\sum^n_{j=1}\Sigma_{kj} w_j+\sum^n_{i=1} w_i \Sigma_{ik}\qquad\qquad\quad $$ $$=2\sum^n_{j=1}\Sigma_{kj} w_j\qquad\qquad\qquad\qquad\qquad$$ where the last line follows from symmetry of $\Sigma$. Thus $$\frac{\partial \sigma_p}{\partial w_k}=\frac{1}{2\sigma_p}\frac{\partial \sigma_p^2}{\partial w_k}=\frac{\sum^n_{j=1}\Sigma_{kj} w_j}{\sqrt{w' \Sigma w}}.\tag{1}$$ Now, using the property of covariance $\text{cov}(X,aY)=a\,\text{cov}(X,Y)$, we know that $$\Sigma_{kj}w_j=w_j\text{cov}(r_k,r_j)=\text{cov}(r_k,w_jr_j).$$ Thus $$\frac{\partial \sigma_p}{\partial w_k}=\frac{\sum^n_{j=1}\text{cov}(r_k,w_jr_j)}{\sqrt{w' \Sigma w}}=\frac{\text{cov}(r_k,R'w)}{\sqrt{w' \Sigma w}}=\frac{\text{cov}(r_k,r_p)}{\sigma_p}$$ where again we have used a property of covariance: $\text{cov}(X,Y+Z)=\text{cov}(X,Y)+\text{cov}(X,Z).$ Using matrices, you should have got $$\frac{\partial \sigma_p}{\partial w} = (\Sigma w) (w' \Sigma w)^{-1/2}$$ which is equation $(1)$. Then $$(\Sigma w)_k=\sum^n_{j=1}\text{cov}(r_k,w_jr_j)$$ and so $$(\Sigma w)_k=\text{cov}(r_k,R'w)=\text{cov}(r_k,r_p),$$ and the result follows. (It looks like there is a mistake in the expression you were trying to obtain.)
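A finite-difference sanity check of formula $(1)$ in plain Python; the $2\times 2$ covariance matrix and weights are made-up test data:

```python
import math

Sigma = [[2.0, 0.3], [0.3, 1.0]]   # a positive-definite covariance matrix
w = [0.4, 0.7]

def sigma_p(w):
    return math.sqrt(sum(w[i] * Sigma[i][j] * w[j]
                         for i in range(2) for j in range(2)))

# formula (1): gradient = Sigma w / sqrt(w' Sigma w)
grad = [sum(Sigma[k][j] * w[j] for j in range(2)) / sigma_p(w) for k in range(2)]

eps = 1e-6
for k in range(2):
    wp = list(w); wp[k] += eps
    wm = list(w); wm[k] -= eps
    fd = (sigma_p(wp) - sigma_p(wm)) / (2 * eps)   # central difference
    assert abs(grad[k] - fd) < 1e-8
```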
Function returning number of subsets of size $k$ of a set of size $n$.
Actually, the binomial coefficient works in this case since for $k>n$ we have one factor in the numerator which equals zero: $$\binom{n}{k} = \frac{n(n-1)\cdots\overbrace{(n-n)}^{=0}\cdots(n-k+1)}{k!}=0$$
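For what it's worth, Python's `math.comb` follows the same convention:

```python
from math import comb

assert comb(5, 7) == 0    # k > n gives 0, as the zero factor suggests
assert comb(5, 3) == 10
```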
Prove that the set $a\Bbb Z+b\Bbb Z$ is dense in $\Bbb R$ if $\frac ab$ is an irrational number.
Let $r=\inf\{\,x\in a\Bbb Z+b\Bbb Z\mid x>0\,\}$. Clearly, $0\le r\le a<\infty$. Then either $r\in a\Bbb Z+b\Bbb Z$ or there exists a strictly decreasing sequence in $a\Bbb Z+b\Bbb Z$ converging to $r$. Assume there are $n_1,n_2,m_1,m_2$ with $r<an_1+bm_1<an_2+bm_2<2r$. Then $0<a(n_2-n_1)+b(m_2-m_1)<r$, contradicting the definition of $r$ as infimum. Hence for $r>0$, the case of a strictly decreasing sequence in $a\Bbb Z+b\Bbb Z$ converging to $r$ is not possible. We conclude that either $r=0$ or $0<r\in a\Bbb Z+b\Bbb Z$. Assume $0<r\in a\Bbb Z+b\Bbb Z$, say $r=an+bm$. If $an'+bm'\notin r\Bbb Z$, then there exists $k\in\Bbb Z$ with $kr<an'+bm'<(k+1)r$, so $0<a(n'-kn)+b(m'-km)<r$, again contradicting the definition of $r$ as infimum. Hence every element of $a\Bbb Z+b\Bbb Z$ would be an integer multiple of $r$; in particular $a=kr$ and $b=lr$ for some $k,l\in\Bbb Z$, so that $\frac ab=\frac kl$ would be rational, contradicting the hypothesis. We conclude $r=0$. Then for every $\epsilon>0$, there exists $x\in a\Bbb Z+b\Bbb Z$ with $0<x<\epsilon$. Now if $(u,v)\subset \Bbb R$ is any open interval, pick $x\in a\Bbb Z+b\Bbb Z$ with $0<x<v-u$. Then $x\Bbb Z$ intersects $(u,v)$, as was to be shown.
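A numerical illustration (not part of the proof) with $a=1$, $b=\sqrt2$: note that $(m\sqrt2)\bmod 1$ equals $n\cdot 1+m\sqrt2$ for a suitable integer $n$, so these fractional parts all lie in $a\Bbb Z+b\Bbb Z$, and they come very close to an arbitrary target.

```python
import math

target = 0.3
best = min(abs((m * math.sqrt(2)) % 1 - target) for m in range(1, 2000))
assert best < 0.005   # some n + m*sqrt(2) lies within 0.005 of 0.3
```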
Show that the function satisfies the given partial differential equation.
You should have taken a look at Euler's Homogeneous Function Theorem But you can do this $$\frac{x^2+y^2}{\sqrt{x^2+y^2}}=\frac{\sqrt{x^2+y^2}\cdot\sqrt{x^2+y^2}}{\sqrt{x^2+y^2}}=\sqrt{x^2+y^2}$$ $$\text{or}$$ $$\frac{x^2+y^2}{\sqrt{x^2+y^2}}=\frac{x^2+y^2}{\sqrt{x^2+y^2}}\cdot\frac{\sqrt{x^2+y^2}}{\sqrt{x^2+y^2}}= \frac{x^2+y^2}{x^2+y^2}(\sqrt{x^2+y^2}) =\sqrt{x^2+y^2}$$ Note: I'm assuming that $x\neq0 \text{ and } y\neq0$
Open covers for $\mathbb{R}$
I don't think so. Separate the index set $\mathbb Z^+$ into two parts: $N_1 = \{ n \in \mathbb Z^+ : n = 2^k\text{ for some } k \in \mathbb Z^+\}$ and $N_2 = \mathbb Z^+ \setminus N_1$. Enumerate the rational numbers as follows: if $n \in N_2$ let $q_n = n$, and let $\{q_n\}_{n \in N_1}$ be an arbitrary enumeration of $\mathbb Q \setminus N_2$. Let $\epsilon_n = \frac 1n$ for all $n$. The intervals $(n-\frac 1 n,n+ \frac 1n)$, $n \in N_2$, fail to cover a portion of the line containing intervals of infinite total length. On the other hand, the intervals $(q_n - \frac 1n, q_n + \frac 1n)$, $n \in N_1$, may be indexed as $(q_{2^k}-\frac 1{2^k},q_{2^k}+ \frac 1{2^k})$, $k \in \mathbb Z^+$, and thus have finite total length. This means they can't cover everything the first set of intervals missed.
a convergent sequence in a metric space is bounded (PROOF)
There is a fundamental flaw in that argument, in that it is not proving what the statement is all about. Let $(X,d)$ be the metric space. To prove that your sequence is bounded, you must find a $x \in X$ and a $r > 0$ such that for all $n \in \mathbb N$, $d(a_n, x) < r$ (or $\le r$, that is an equivalent statement). So bounding $d(a_n,a_m)$ is essentially pointless. If you go through the argument until $$ r \overset{def}= \max \{ 1, d(a,a_1),\cdots,d(a,a_{n_0-1}) \} $$ the argument is in fact complete (I used a different letter, $r$, to denote the radius, since $d$ is already used for the metric), since $d(a,a_n) \le r$ for all $n \in \mathbb N$ by definition of $r$! So the sequence is bounded (pick $a$ for the center and $r$ for the radius). Hope that helps,
What is the base and dim for the kernel of this linear transformation
Looks correct. Also, you can notice that a second-degree polynomial can have at most two roots. The kernel here requires the polynomial to have three roots, and that is impossible unless the polynomial equals zero for all values of $x$.
When do we say the uncountable sum $\sum \alpha_ie_i$ of vectors in non-separable Hilbert space is convergent?
The sum $\sum_{i\in I}\alpha_ie_i$ converges to $s$ if the net of finite partial sums $$\sum_{j\in J}\alpha_je_j,\qquad J\subset I\text{ finite},$$ directed by inclusion of the finite index sets $J$, converges to $s$: for every $\epsilon>0$ there is a finite $J_0\subset I$ such that $\bigl\|s-\sum_{j\in J}\alpha_je_j\bigr\|<\epsilon$ for every finite $J\supseteq J_0$.
Isolated singularites of $f(z)=\frac{\log(1+z)}{z^2}$?
The points are not isolated singularities, since a whole half-line must be removed; hence you can't use the classification "removable/pole/essential" for them. The usual name is branch point. We say that $]-\infty,-1]$ is the principal branch cut (since it is determined by the principal argument) of $\log(1+z)$. See here for more information: https://en.wikipedia.org/wiki/Branch_point.
$ (H, \alpha) $ is lie subgroup of $G$, $\exp(tY) \in \alpha(H)$, How to prove $Y \in \alpha_\ast (h)$?
The map $t\mapsto \exp(tY)$ is a differentiable map defined on $\mathbb{R}$ that takes its values in $\alpha(H)$. This implies that its differential at $t$ is a map $T_t\mathbb{R}=\mathbb{R}\rightarrow T_{\exp(tY)}\alpha(H)$; in particular, for $t=0$ we have that ${d\over{dt}}\Big|_{t=0}\exp(tY)=Y$ is an element of $T_{Id}\alpha(H)=\alpha_*(h)$.
determining a and b so the function becomes differentiable
Hint: Since $2x+3$ is a line, if you want $f$ to be differentiable at $1$, then $2x+3$ must be tangent to $x^3+ax^2+b$ at $1$, so: $$f'(x)=(x^3+ax^2+b)'=3x^2+2ax,\qquad (2x+3)'=2,\\ f'(1)=3+2a=2 \rightarrow a=-\frac{1}{2}.$$ Continuity at $1$ then gives $b=4-a=\frac{9}{2}$. Can you finish?
Number of perfect squares less than N?
Hint: It will be equal to the floor function of $\sqrt N$
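A quick brute-force comparison in Python; note the edge case that if $N$ itself is a perfect square, $\lfloor\sqrt N\rfloor$ counts $N$ too, so for a strict "less than $N$" count one can use $\lfloor\sqrt{N-1}\rfloor$ (the helper name is illustrative):

```python
from math import isqrt

def squares_below(N: int) -> int:
    """Positive perfect squares strictly less than N."""
    return isqrt(N - 1)   # equals floor(sqrt(N)) unless N is a perfect square

for N in [2, 9, 10, 50, 100, 101]:
    assert squares_below(N) == sum(1 for k in range(1, N) if k * k < N)
```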
Prove a step in exercise 1.19 Brezis's Functional Analysis
Since $f(x) \le \|f\|\cdot \|x\|$, the inequality "$\le$" follows. Take $\epsilon>0$. Then there is $x_\epsilon$ with $\|x_\epsilon\|=1$ and $f(x_\epsilon) \ge \|f\|-\epsilon$. Then for $t>0$ $$ \sup_{x\in E}(f(x)-\phi(x))\ge f(tx_\epsilon) - \phi(t) \ge t (\|f\|-\epsilon) - \phi(t). $$ This holds for all $\epsilon>0$, hence $$\sup_{x\in E}(f(x)-\phi(x))\ge t \|f\| - \phi(t). $$ Taking the supremum over $t>0$ on the right-hand side yields the claim.
Integrating $\frac{\log (1-x)}{x}$
$$\frac{\log(1-x)}x=-\frac{(x+x^2/2+x^3/3+x^4/4+...)}x=-(1+x/2+x^2/3+x^3/4+...)$$ Now: $$\int\frac{\log(1-x)}x{\rm d}x=-(x+x^2/2^2+x^3/3^3+...)=-{\rm Li}_2(x)$$ Now: $$\int_0^{1/2}\frac{\log(1-x)}x{\rm d}x={\rm Li}_2(0)-{\rm Li}_2(1/2)=\frac{\log^22}2-\frac{\pi^2}{12}$$ since ${\rm Li}_2(0)=0$ and by duplication formula: $$\mathrm{Li}_2(x)+\mathrm{Li}_2(1-x) =\frac{\pi^2}{6}-\log(x)\log(1-x)$$ Put $x=1/2$: $$2\mathrm{Li}_2(1/2)=\frac{\pi^2}{6}-\log(1/2)\log(1/2)\implies \mathrm{Li}_2(1/2)=\frac{\pi^2}{12}-\frac12\log^2(2)$$
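A quick numerical confirmation of the final value, using the defining series $\mathrm{Li}_2(x)=\sum_{n\ge1}x^n/n^2$:

```python
import math

# integral of log(1-x)/x over [0, 1/2] equals -Li2(1/2)
series = -sum(0.5 ** n / n ** 2 for n in range(1, 80))
closed = math.log(2) ** 2 / 2 - math.pi ** 2 / 12
assert abs(series - closed) < 1e-12
```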
(Q,+) and (C*,+) has no finite index subgroup
There is a nice exercise stating that if $G$ is a divisible group then every non-zero quotient group of it is infinite, and vice versa. Assuming you can show that these abelian groups are divisible, you can apply this fact.
Is it true that for every connected matroid $M$ we have $\chi(M)=\chi(M^*)$?
This is not true. Consider a cycle graph $C_n$ and the corresponding matroid $M$. Then $\chi(M) = 2$. The dual is the matroid corresponding to the graph with two vertices and $n$ parallel edges between these vertices. It follows that $\chi(M^*) = n$. More generally, if $M = U^r_n$ is a uniform matroid of rank $r$, then $\chi(M) = \lceil n / r\rceil$, while for $M^* = U_n^{n-r}$ we have $\chi(M^*) = \lceil n / (n-r) \rceil$, and these are not equal in general.
Transformation of uniformly distributed random number
Let random variable $X$ be uniformly distributed on the interval $[0,1]$. We will find constants $a$ and $b$ such that the random variable $aX+b$ is uniformly distributed on $[-11,17]$. The intuition is that if we find $a$ and $b$ such that $(a)(0)+b=-11$, and $(a)(1)+b=17$, that should do the job. Solving these two equations for $a$ and $b$, we find that $b=-11$ and $a=28$. Intuition is perhaps not enough. Let us show that the random variable $Y$, where $Y=28X-11$, is indeed uniformly distributed in the interval $[-11,17]$. Note that $Y$ is just a scaling of $X$ followed by a shift. That should preserve uniform distribution. So we want to prove that for any $y$ between $-11$ and $17$, we have $\Pr(Y\le y)=\frac{y-(-11)}{28}$, for this is what uniform distribution on $[-11,17]$ means. For such a $y$, we have $$\Pr(Y\le y)=\Pr(28X-11\le y)=\Pr(28X\le y-(-11))=\Pr\left(X\le \frac{y-(-11)}{28}\right).$$ It is not hard to check that $\frac{y-(-11)}{28}$ lies between $0$ and $1$. Since $X$ is uniformly distributed over this interval, it follows that $$\Pr\left(X\le \frac{y-(-11)}{28}\right)=\frac{y-(-11)}{28},$$ which is what we wanted to prove. Alternately, we have shown that for $y$ between $-11$ and $17$, $F_Y(y)$, the cumulative distribution function of $Y$, is equal to $\frac{y-(-11)}{28}$. Differentiating, we find that between $-11$ and $17$, the density function of $Y$ is $\frac{1}{28}$.
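A simulation sketch of the scaling-and-shift $Y=28X-11$ (seeded for reproducibility):

```python
import random

random.seed(0)
samples = [28 * random.random() - 11 for _ in range(100_000)]

assert all(-11 <= y <= 17 for y in samples)   # Y stays inside [-11, 17]
mean = sum(samples) / len(samples)
assert abs(mean - 3.0) < 0.2                  # near the midpoint (-11 + 17)/2 = 3
```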
Common prime divisors of 2 integers
The proof is simple. Let $g$ be the gcd of $a$ and $b$, so we can express $a$ and $b$ as $a=gc$ and $b=gd$. If $c$ and $d$ had any common prime factor $p$, then $pg$ would be a common divisor of $a$ and $b$ greater than $g$, contradicting the definition of $g$ as the greatest common divisor. QED: the gcd of $a,b$ contains all the common prime factors of $a$ and $b$.
What does $L^2((1+|\xi|^2)^sd\xi)$ mean?
This is the usual $L^2$ space with respect to the measure whose density / Radon-Nikodym derivative is given by $$\frac{d\mu}{d\xi} = (1 + |\xi|^2)^s$$ That is, it's the set of functions for which $$\int |f(\xi)|^2 (1 + |\xi|^2)^s \, d\xi < \infty$$ This is useful to study because these functions must satisfy much more stringent decay conditions at infinity than your usual $L^2(dx)$ functions. For example, $|f(\xi)| \lesssim |\xi|^{-1}$ isn't enough to guarantee that $f \in L^2(d\mu)$, because the integrand could still decay as slowly as $|\xi|^{2s - 2}$ at infinity, which is not integrable there once $s$ is large enough. But Schwartz functions lie in this space for all $s$.
How to create number six using three zeroes?
$6=\left(\cos(0)+\cos(0)+\cos(0)\right)!$
$P(x) \sim_{ n \rightarrow \infty} x^{n}$
The best way to check is to evaluate the limit $$\lim_{n \to \infty} \frac{P(x)}{x^n}=a.$$ The sign $\sim$ is used precisely when $a=1$. In your question the limit isn't necessarily $1$: depending on the coefficients it may be some other finite value, or it may not exist at all. So, according to the given information, we can't say anything for certain about the asymptotic relation between the polynomial and $x^n$.
How to expand $\mid\mu_1\overline{\nu}_1 + \mu_2\overline{\nu}_2\mid^2$ where $\mu_i$ and $\nu_i$ are complex inner products for $i=1,2$
Because\begin{align}\bigl|\mu_1\overline{\nu_1}+\mu_2\overline{\nu_2}\bigr|^2&=\bigl(\mu_1\overline{\nu_1}+\mu_2\overline{\nu_2}\bigr)\overline{\bigl(\mu_1\overline{\nu_1}+\mu_2\overline{\nu_2}\bigr)}\\&=\bigl(\mu_1\overline{\nu_1}+\mu_2\overline{\nu_2}\bigr)\bigl(\overline{\mu_1}\nu_1+\overline{\mu_2}\nu_2\bigr)\\&=\bigl|\mu_1\nu_1\bigr|^2+\bigl|\mu_2\nu_2\bigr|^2+\mu_1\overline{\nu_1}\,\overline{\mu_2}\nu_2+\overline{\mu_1\overline{\nu_1}\,\overline{\mu_2}\nu_2}\\&=\bigl|\mu_1\nu_1\bigr|^2+\bigl|\mu_2\nu_2\bigr|^2+2\operatorname{Re}\left(\mu_1\overline{\nu_1}\,\overline{\mu_2}\nu_2\right).\end{align}
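A numeric spot check of this expansion with arbitrary complex test values:

```python
m1, m2 = 1.2 + 0.7j, -0.4 + 2.1j
n1, n2 = 0.3 - 1.5j, 2.2 + 0.9j

lhs = abs(m1 * n1.conjugate() + m2 * n2.conjugate()) ** 2
rhs = (abs(m1 * n1) ** 2 + abs(m2 * n2) ** 2
       + 2 * (m1 * n1.conjugate() * m2.conjugate() * n2).real)
assert abs(lhs - rhs) < 1e-9
```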
How do I solve an equation involving $e^t$ and $t$ on one side? $18=0.5e^t-0.5t-0.5$
Newtons method generally works but its also kind of boring. Since you are in a dynamics class you should be familiar with the concept of a fixed point. In this case we will $``$invent$"$ a sequence for which the solution of this equation is a fixed point. Define $y(t) = 0.5e^t-0.5t-0.5$ and let $t_c$ be a number such that $y(t_c)=18$. Some simple estimations, $$y(1) = 0.5 e^{1} -0.5(1)-0.5 \approx 0.5(3)-1 = 0.5 $$ $$ y(3)= 0.5e^3 -0.5(3)-0.5 \approx 0.5(27)-2=16.5 $$ So we expect the intersection to be just past $t=3$, this is what we will use as our seed value later. I used these values to make a very rough sketch of the graph which can be seen below. Now we will make a sequence which converges to the solution. One way is to make the recurrence shown below where $a(t_c)=0$ when the solution is reached. Notice that it is important that a(t) be positive when $t<t_c$ and negative when $t>t_c$ otherwise the fixed point won't be an attractor. $$t_{n+1} = t_n + a(t_n)$$ There are many choices for $a(t)$. A few options are listed below. $a(t)=18-y(t)$ $a(t) = \tan^{-1}(18-y(t))$ $a(t) = 1-y(t)/18$ Another possible sequence to try is one which multiplies the old values of $t$. Looking at the sequence below we can see that $b(t_c)=1$. It should be clear that for $t_c$ to be an attractor we must have that $b(t)$ is greater than 1 when $t$ is less than $t_c$, and less than $1$ when $t$ is greater than $t_c$. $$ t_{n+1} = t_n b(t_n) $$ Some possible choices here are , $b(t) = 18/y(t)$ $b(t) = \frac{4}{\pi} \tan^{-1}(18/y(t))$ $b(t) = \frac{1}{2-18/y(t)}$ Try playing around with these and see if you can invent your own. Implementing the first example for $a(t)=18-y(t)$ one finds that with a seed value of $t=3$ the sequence quickly diverges. When you run into this you need to find some way of keeping $a(t)$ from being too large near the intersection. One way is to use the inverse tangent function. In this case it is as easy as dividing our choice by 18. 
Using $t_0=3$ and $a(t)=1-y(t)/18$ I get $$t_0=3,\quad t_1=3.553\dots,\quad t_2=3.7095 \dots,\quad t_3=3.70605\dots,\quad\ldots,\quad t_{10}=3.706384959,$$ which is the limit of the precision of my calculator.
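For anyone who wants to reproduce this without a calculator, here is a minimal sketch of the same iteration $t_{n+1} = t_n + a(t_n)$ with $a(t)=1-y(t)/18$ and seed $t_0 = 3$:

```python
import math

# Fixed-point iteration t_{n+1} = t_n + a(t_n) with a(t) = 1 - y(t)/18,
# solving y(t) = 0.5*e^t - 0.5*t - 0.5 = 18 from the seed t = 3.
def y(t):
    return 0.5 * math.exp(t) - 0.5 * t - 0.5

t = 3.0
for _ in range(20):
    t = t + 1 - y(t) / 18

# t converges to roughly 3.706384959, matching the hand computation above.
```

Near the fixed point the map has derivative of magnitude about $0.1$, which is why the convergence is so fast.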
Numbers relative to their sum of Divisors
These notes derive (equation ($4$) on p. $4$) $$ \sum_{k\le n}\frac{\sigma(k)}k=\frac{\pi^2}6n+O(\log n)\;. $$ Thus for your ratio $D(n)$ we have $$ \sum_{k\le n}\frac{\sigma(k)-k-1}k=\left(\frac{\pi^2}6-1\right)n+O(\log n)\;, $$ and dividing by $n$ shows that the average converges to $$ \frac{\pi^2}6-1\approx0.645\;. $$
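As a sanity check on the limit, here is a quick numerical sketch that computes the average of $\frac{\sigma(k)-k-1}{k}$ with a simple divisor-sum sieve (the cutoff $n=10{,}000$ is an arbitrary choice):

```python
import math

# Sieve of divisor sums: sigma[m] = sum of all divisors of m.
n = 10_000
sigma = [0] * (n + 1)
for d in range(1, n + 1):
    for m in range(d, n + 1, d):
        sigma[m] += d

# Average of (sigma(k) - k - 1)/k over k <= n; the O(log n)/n error term
# is tiny at this size, so this should be close to pi^2/6 - 1 ≈ 0.645.
avg = sum((sigma[k] - k - 1) / k for k in range(1, n + 1)) / n
```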
Projection maps on (fibre) bundles
The relationship between submersions and fibrations comes from Ehresmann's fibration theorem which asserts that a proper submersion between smooth manifolds is a fibration. Wikipedia is being somewhat imprecise; you need properness or else there are counterexamples. I'm not sure what distinction you're making between fiber bundles and bundles. The smooth function $$f : \mathbb{R} \ni x \mapsto x^3 \in \mathbb{R}$$ is surjective (even bijective) but fails to be a submersion at $x = 0$.
What is a filled rectangle called, if anything?
Yes! It is called a $2$-cell; in general, a $k$-cell is a closed set of points (meaning that the boundaries are included) in $k$ dimensions with "straight" boundaries. So a $3$-cell is a filled-in rectangular prism, a $1$-cell is a closed interval $[a,b]$, and a $0$-cell is a point.
About the proof that $\mathbb{Z_{3}} \times \mathbb{Z_{3}}$ is not cyclic
In a cyclic group of order $9$ there must be an element of order $9$ by definition of cyclic group. But in $\mathbb{Z_{3}} \times \mathbb{Z_{3}}$ the order of each element is $1$ or $3$, because for every $(a,b) \in \mathbb{Z_{3}} \times \mathbb{Z_{3}}$ we have $$3\cdot (a,b) = (3a,3b) = (0,0) = 0_{\mathbb{Z_{3}} \times \mathbb{Z_{3}}}$$ So $\mathbb{Z_{3}} \times \mathbb{Z_{3}}$ isn't cyclic.
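The order computation can also be checked by brute force; a small sketch enumerating all nine elements of $\mathbb{Z_{3}} \times \mathbb{Z_{3}}$ under componentwise addition:

```python
from itertools import product

# Compute the additive order of each element (a, b) in Z_3 x Z_3.
orders = set()
for a, b in product(range(3), repeat=2):
    x, k = (a, b), 1
    while x != (0, 0):
        x = ((x[0] + a) % 3, (x[1] + b) % 3)
        k += 1
    orders.add(k)

# orders == {1, 3}: no element has order 9, so nothing generates the group.
```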
Does such Holomorphic function exist on open unit disc?
Hint: Use $\displaystyle g(z)=\sin(\frac{\pi}{1-z})$
Help Showing that the Adjoint Operator $T^*$ is Surjective if and only if $T$ is Injective
Suppose $T^*$ exists and is surjective. Let $\mathbf{v} \in \ker(T)$. Since $T^*$ is surjective, there exists $\mathbf{w} \in W$ such that $T^*(\mathbf{w}) = \mathbf{v}$. Then $$\langle \mathbf{v}, \mathbf{v} \rangle = \langle \mathbf{v}, T^*(\mathbf{w}) \rangle = \langle T(\mathbf{v}), \mathbf{w} \rangle = \langle \mathbf{0}_W, \mathbf{w} \rangle = 0.$$ This only happens when $\mathbf{v} = \mathbf{0}_V$. Hence $\ker(T) = \{\mathbf{0}_V\}$ and $T$ is injective.

The converse is not true in general; here is a counterexample. Let $V = W = l^2$, the space of square-summable sequences, and define a linear operator $T$ by $$T((a_n)) = (a_n - a_{n+1}),$$ for all $(a_n) \in V$ (identity minus left shift). Then:

- $T$ is injective. (The only constant square-summable sequence is the zero sequence.)
- $T^*$ exists. (Identity minus right shift.)
- $T^*$ is not surjective. (The sequence $(1,0,0,\ldots) \in V$ is not in $\mathcal{R}(T^*)$.)
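It is worth noting that in finite dimensions the equivalence does go both ways, because $\operatorname{rank}(T) = \operatorname{rank}(T^*)$; that is why the counterexample needs an infinite-dimensional space. A quick numerical illustration with a concrete matrix I made up (the adjoint with respect to the standard inner product is the transpose):

```python
import numpy as np

# A sample T: R^3 -> R^5 with full column rank, i.e. injective.
T = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])

# rank(T) == rank(T.T), so T injective (rank 3 = dim V) forces
# T* = T.T surjective (full row rank 3), and conversely.
assert np.linalg.matrix_rank(T) == np.linalg.matrix_rank(T.T) == 3
```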
Tangent map of the inclusion map of a submanifold
The inclusion map is given by the identity (if you see $F_t$ as a subspace of $M$); the tangent map $T\iota$ is then the inclusion of the tangent space of $F_t$ in $TM$.
Conditional mean on uncorrelated stochastic variable
$\operatorname{Cov}(X,Y)$ can be $0$ while the variables are still dependent (exhibiting a purely nonlinear dependence), in which case we may have $E(X\mid Y) \neq E(X)$. In narrower terms, "mean independence" implies zero covariance, but zero covariance does not necessarily imply mean independence: $E(X\mid Y) = E(X) \Rightarrow \operatorname{Cov}(X,Y) =0$, but not the reverse. As a simple example, consider the following situation: let $Y$ be a random variable with $E(Y)=0,\,\, E(Y^3)=0$, and define another random variable $X = Y^2 + u,\,\, E(u\mid Y) =0$. Then $E(X\mid Y) = Y^2 \implies E(X) = E(Y^2) \neq 0$, and $$\operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y) = E\Big[E(XY\mid Y)\Big] - E(Y^2)\cdot 0$$ $$=E\Big[YE(X\mid Y)\Big] = E(Y\cdot Y^2) = E(Y^3)=0.$$ So the covariance is zero but $X$ is not mean-independent of $Y$.
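The example is easy to see in simulation. Here is a sketch with a standard normal $Y$ (which is symmetric, so $E(Y)=E(Y^3)=0$) and $X = Y^2 + u$ with small Gaussian noise $u$:

```python
import random
import statistics

random.seed(0)

# Y symmetric around 0, X = Y^2 + u with E(u|Y) = 0.
N = 200_000
ys = [random.gauss(0.0, 1.0) for _ in range(N)]
xs = [y * y + random.gauss(0.0, 0.1) for y in ys]

# Sample covariance is (near) zero...
mx, my = statistics.fmean(xs), statistics.fmean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N

# ...yet the conditional mean of X clearly depends on Y:
hi = statistics.fmean(x for x, y in zip(xs, ys) if abs(y) > 1)
lo = statistics.fmean(x for x, y in zip(xs, ys) if abs(y) <= 1)
```

Here `cov` hovers near zero while `hi` (mean of $X$ given $|Y|>1$, about $2.5$) far exceeds `lo` (about $0.3$), so $X$ is not mean-independent of $Y$.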
Prove $\sin^3A-\cos^3A=\left(\sin^2A-\cos^2A\right)(1-2\sin^2A\cos^2A)$
With $A=\pi$ we have $\sin A = 0$ and $\cos A = -1$, so the LHS is equal to $1$ while the RHS is equal to $-1$; therefore the identity is false.
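For a quick numerical confirmation of the counterexample (bearing in mind that in floating point $\sin\pi$ is only approximately $0$):

```python
import math

# Evaluate both sides of the claimed identity at A = pi.
A = math.pi
lhs = math.sin(A) ** 3 - math.cos(A) ** 3
rhs = (math.sin(A) ** 2 - math.cos(A) ** 2) * (1 - 2 * math.sin(A) ** 2 * math.cos(A) ** 2)
# lhs ≈ 1 and rhs ≈ -1, so the two sides disagree at A = pi.
```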
Sampling from Bayesian posterior without a tractable prior
You don't want to be sampling from the prior $P(A, B)$ but from the posterior $P(A, B \mid C)$, for example by using $$ P(A, B \mid C ) \propto P(C \mid B) P(B \mid A) P(A) $$ in a Metropolis-Hastings or some other approach. Assuming you can do this and you set up your Markov chain, you will have samples $(A_i, B_i) \sim P(A, B \mid C)$, and you can then estimate any functional you like of the variable $B$ by just ignoring the samples of $A$. Sure, that does seem somewhat inefficient, but that inefficiency is the price you have paid for avoiding the intractable integral, and it is often a price worth paying!

To clarify: if you are interested in some statistic $f(B)$, you can write it as a function of both variables, say $f(a, b) = f(b)$, set up the Markov chain, and then use your sample to estimate $$ \begin{align} \frac{1}{N}\sum_{i=1}^{N} f(A_i,B_i) &\rightarrow \int_\mathcal{B} \int_{\mathcal{A}} f(a, b)\, p(a,b\mid c)\,da\,db \\ &=\int_\mathcal{B} f(b) \left( \int_{\mathcal{A}} p(a,b\mid c)\, da \right) db \\ &= \int_{\mathcal{B}} f(b)\, p(b\mid c)\,db = \mathbb{E}_{B \sim P(B\mid C)}\left[ f(B) \right]. \end{align} $$ Finally, it is worth mentioning that the attractiveness of sampling methods is how easy they are to set up and produce some output; it can be more challenging to assess convergence of that output, and there are alternatives to sampling methods with their own particular pros and cons.
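To make this concrete, here is a minimal random-walk Metropolis-Hastings sketch for a toy model of my own choosing (not from the question): $A \sim N(0,1)$, $B \mid A \sim N(A,1)$, $C \mid B \sim N(B,1)$, with $C = 3$ observed. For this Gaussian chain $\mathbb{E}[B \mid C=c] = 2c/3$ exactly, which lets us check the estimate obtained by discarding the $A$ samples:

```python
import math
import random

random.seed(42)
c = 3.0  # observed value of C

# Toy model (my assumption): A ~ N(0,1), B|A ~ N(A,1), C|B ~ N(B,1).
def log_post(a, b):
    # log p(a, b | c) up to an additive constant:
    # log p(a) + log p(b|a) + log p(c|b)
    return -0.5 * (a**2 + (b - a)**2 + (c - b)**2)

# Random-walk Metropolis-Hastings on the pair (a, b).
a, b = 0.0, 0.0
lp = log_post(a, b)
samples_b = []
for i in range(60_000):
    a_new = a + random.gauss(0.0, 0.8)
    b_new = b + random.gauss(0.0, 0.8)
    lp_new = log_post(a_new, b_new)
    if random.random() < math.exp(min(0.0, lp_new - lp)):
        a, b, lp = a_new, b_new, lp_new
    if i >= 10_000:            # discard burn-in
        samples_b.append(b)    # keep only B: this marginalizes out A for free

est = sum(samples_b) / len(samples_b)
# est approximates E[B | C=3] = 2, despite never computing the integral over A.
```

The key line is `samples_b.append(b)`: ignoring the $A$ coordinate of each joint sample is exactly the marginalization in the display above.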
Change of Basis in Canonical Correlation Analysis
The notation $\Sigma^{1/2}$ means the square root of the matrix $\Sigma$, which exists because your covariance matrix $\Sigma$ is symmetric positive definite. It can therefore be diagonalized as $\Sigma = MDM^{-1}$, where $M$ is orthogonal (so $M^{-1} = M^\top$) and $D$ is a diagonal matrix with all positive diagonal entries; $\Sigma^{1/2} = MD^{1/2}M^{-1}$ is then obtained by replacing each diagonal entry of $D$ with its square root. Since $\Sigma^{1/2}$ is a square matrix, the equations for $c$ and $d$ are indeed linear transformations. In the last expression, the matrix $\Sigma^{-1/2}$ is the inverse of $\Sigma^{1/2}$. So you can verify, e.g., that $\Sigma_{XX}^{-1/2}c = a$, and so the last expression is the same as the original expression in terms of $a$ and $b$.
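The construction is easy to carry out numerically; here is a sketch using a made-up $2\times 2$ covariance matrix:

```python
import numpy as np

# A sample symmetric positive definite covariance matrix.
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Eigendecomposition: Sigma = M @ D @ M.T with M orthogonal.
eigvals, M = np.linalg.eigh(Sigma)

# Matrix square root and its inverse: replace D by D^{1/2} (resp. D^{-1/2}).
Sigma_half = M @ np.diag(np.sqrt(eigvals)) @ M.T
Sigma_neg_half = M @ np.diag(1.0 / np.sqrt(eigvals)) @ M.T

# Sanity checks: Sigma_half squares to Sigma, and Sigma_neg_half inverts it.
assert np.allclose(Sigma_half @ Sigma_half, Sigma)
assert np.allclose(Sigma_neg_half @ Sigma_half, np.eye(2))
```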