How can I find an inverse of a polynomial in a quotient ring?
Hint $\ \ \color{#c00}{x^2 = 2}\ \Rightarrow\ \dfrac{1}{1+\color{#c00}{x^2} x} \,=\, \dfrac{1}{1+\color{#c00}2x} = \dfrac{1}{1+2\color{#c00}x}\, \dfrac{1-2x}{1-2\color{#c00}x} = \dfrac{1\,-\,2x\ }{1-4(\color{#c00}2)}$ i.e. $ $ rationalize the denominator of $\, \dfrac{1}{1+2\sqrt{2}}.\,$ More generally, use the Extended Euclidean Algorithm. For much further discussion see here and its links.
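If you want to run the Extended Euclidean Algorithm mechanically, here is a small SymPy sketch (SymPy's gcdex returns $(s,t,h)$ with $s\,f+t\,g=h$):

```python
from sympy import symbols, gcdex, expand, rem

x = symbols('x')
# Invert 1 + 2x in Q[x]/(x^2 - 2): Bezout gives s*(1+2x) + t*(x^2-2) = h.
s, t, h = gcdex(1 + 2*x, x**2 - 2, x)
inv = expand(s / h)        # h is a nonzero constant since the inputs are coprime
print(inv)                 # -1/7 + 2*x/7, i.e. (1 - 2x)/(1 - 8)
print(rem(expand((1 + 2*x) * inv), x**2 - 2, x))  # 1, so inv is the inverse
```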
Cosets are only defined with $g\in G$ s.t. $g\notin H$?
Yeah, your professor is just wrong; you definitely want $H$ itself to be a coset for many reasons, e.g. for the purpose of defining quotient groups.
How to find this $\prod_{k=-\infty}^{+\infty}\frac{x^2+(4k+1-y)^2}{x^2+(4k+3-y)^2}$
Firstly, define the complex number $z=x+yi$. Then, we are asked to prove that $$ \prod_{k\in\mathbb{Z}}\left|\frac{z-(4k+1)i}{z-(4k+3)i}\right|^2=\frac{1+e^{\pi x}-2e^{\frac{\pi}{2}x}\sin{\dfrac{\pi}{2}y}}{1+e^{\pi x}+2e^{\frac{\pi}{2}x}\sin{\dfrac{\pi}{2}y}}. $$ Now, let's rewrite the right-hand side of this equality: $$ \frac{1+e^{\pi x}-2e^{\frac{\pi}{2}x}\sin{\dfrac{\pi}{2}y}}{1+e^{\pi x}+2e^{\frac{\pi}{2}x}\sin{\dfrac{\pi}{2}y}} = \frac{\left(e^{\frac{\pi}{2}x}-\sin\dfrac{\pi}{2}y\right)^2+\cos^2\dfrac{\pi}{2}y}{\left(e^{\frac{\pi}{2}x}+\sin\dfrac{\pi}{2}y\right)^2+\cos^2\dfrac{\pi}{2}y} = \left|\frac{e^{\frac{\pi}{2}x}+i\left(\cos\dfrac{\pi}{2}y+i\sin\dfrac{\pi}{2}y\right)}{e^{\frac{\pi}{2}x}-i\left(\cos\dfrac{\pi}{2}y+i\sin\dfrac{\pi}{2}y\right)}\right|^2= \\ =\left|\frac{e^{\frac{\pi}{2}x}+ie^{i\frac{\pi}{2}y}}{e^{\frac{\pi}{2}x}-ie^{i\frac{\pi}{2}y}}\right|^2=\left|\frac{e^{\frac{\pi}{2}(x-iy)}+i}{e^{\frac{\pi}{2}(x-iy)}-i}\right|^2 =\left|\frac{e^{\frac{\pi}{2}\overline{z}}+i}{e^{\frac{\pi}{2}\overline{z}}-i}\right|^2=\left|\frac{e^{\frac{\pi}{2}z}-i}{e^{\frac{\pi}{2}z}+i}\right|^2. $$ Thus, we need to prove that $$ \prod_{k\in\mathbb{Z}}\left|\frac{z-(4k+1)i}{z-(4k+3)i}\right|=\left|\frac{e^{\frac{\pi}{2}z}-i}{e^{\frac{\pi}{2}z}+i}\right|. $$ Let $w=z-i$, then the last equality can be rewritten as $$ \prod_{k\in\mathbb{Z}}\left|\frac{w-4ki}{w-(4k+2)i}\right|=\left|\frac{e^{\frac{\pi}{2}w}-1}{e^{\frac{\pi}{2}w}+1}\right|. $$ Update. As David mentioned in the comments, in order to compute the product $\prod_{k\in\mathbb{Z}}\frac{t-4k}{t-(4k+2)}$ we can use the following formula: $$ \frac{\sin\pi z}{\pi z}=\prod_{k=1}^{\infty}\left(1-\frac{z^2}{k^2}\right), $$ or $$ \sin\pi z=\pi z\prod_{k\in\mathbb{Z}\backslash\{0\}}\frac{k-z}{k}. $$ For $z=t/4$ and $z=(t-2)/4$ we have $$ \sin\frac{\pi t}{4}=\frac{\pi t}{4}\prod_{k\in\mathbb{Z}\backslash\{0\}}\frac{4k-t}{4k} $$ and $$ \sin\frac{\pi (t-2)}{4}=\frac{\pi (t-2)}{4}\prod_{k\in\mathbb{Z}\backslash\{0\}}\frac{4k+2-t}{4k}, $$ respectively. Hence, $$ \frac{\sin\pi t/4}{\sin\pi (t-2)/4}=\prod_{k\in\mathbb{Z}}\frac{t-4k}{t-(4k+2)}. $$ Therefore, $$ \prod_{k\in\mathbb{Z}}\left|\frac{w-4ki}{w-(4k+2)i}\right|=[t=w/i]=\prod_{k\in\mathbb{Z}}\left|\frac{t-4k}{t-(4k+2)}\right|=\left|\frac{\sin\pi t/4}{\sin\pi (t-2)/4}\right|= \\ =\left|\frac{e^{i\pi t/4}-e^{-i\pi t/4}}{e^{i\pi (t-2)/4}-e^{-i\pi (t-2)/4}}\right|=\left|\frac{e^{i\pi t/4}-e^{-i\pi t/4}}{-ie^{i\pi t/4}-ie^{-i\pi t/4}}\right|=\left|\frac{e^{\pi w/2}-1}{e^{\pi w/2}+1}\right|, $$ so we're done.
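As a numerical sanity check of the claimed identity, a Python sketch (the symmetric truncation at $|k|\le K$ matters, since the factors must be paired $k\leftrightarrow -k$ for the product to converge; $K$ is arbitrary):

```python
import math

def lhs(x, y, K=10**5):
    """Symmetric partial product over |k| <= K of the factors in the title."""
    p = 1.0
    for k in range(-K, K + 1):
        p *= (x**2 + (4*k + 1 - y)**2) / (x**2 + (4*k + 3 - y)**2)
    return p

def rhs(x, y):
    e = math.exp(math.pi * x / 2)
    s = math.sin(math.pi * y / 2)
    return (1 + e*e - 2*e*s) / (1 + e*e + 2*e*s)

print(lhs(0.3, 0.7), rhs(0.3, 0.7))  # agree to roughly four decimal places
```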
What does this vector space mean?
From the context, he means that each of the coordinate functions $f_i$ is defined and continuous on $$\Omega=\prod_{i=1}^{n}{[\alpha_i, \beta_i] \subset \mathbb{R}^n}$$ The vector space referred to is the vector space of such functions. To put it another way, $$ \Omega=\{(x_1,x_2,\dots,x_n)|\alpha_i\leq x_i\leq\beta_i, i=1,2,\dots,n\} $$ This isn't what the paper says, but I feel pretty sure that's what it means. I'm not certain, though, about the inequalities. Perhaps they are meant to be strict.
Are there any non-analytic smooth functions that aren't piecewise or dependent on infinite sums?
All elementary functions are analytic on any open set of their domain. Compositions of analytic functions are analytic. Sums, products, and quotients of analytic functions are also analytic (as long as the function we divide by is non-zero). Therefore it is not possible to find a function which is smooth at a point yet non-analytic there, unless you step outside of finite compositions/sums/products/quotients of elementary functions. That is, to get a smooth non-analytic function out of elementary functions, the function you define has to be defined 'piecewise'. Of course there are also non-elementary functions.
Trying to form a formula for a specific sequence
It works! If $a_1=5$ and $a_n=3a_{n-1}-4$ for $n \ge 2$, then a straightforward inductive proof gives $a_n=3^n+2$. Try the proof!
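A quick Python check of the closed form for the first few terms:

```python
a = 5                                  # a_1
for n in range(1, 11):
    assert a == 3**n + 2, (n, a)       # closed form a_n = 3^n + 2
    a = 3*a - 4                        # recurrence a_{n+1} = 3*a_n - 4
print("a_n = 3^n + 2 holds for n = 1..10")
```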
Prove that $F[x,y]/\langle x^2-y\rangle$ is never isomorphic to $F[x,y]/\langle x^2-y^2\rangle$, where $F$ is a field
Your technique is not valid because $F$ can be any field, for instance $\mathbb F_2$. In fact, the first is an integral domain, while the second is not (and this doesn't depend on the field).
Exercise 4.3 (3) in Brezis. Convergence in $L_p$
The only remaining thing to do is to show that $\int_\Omega h_n\to 0$, where $h_n(x)=\left\lvert f(x)\left(g_n(x)-g(x)\right)\right\rvert^p$. Since $f$ belongs to $\mathbb L^p$, the quantity $\left\lvert f(x) \right\rvert^p$ is finite for almost every $x$, hence the fact that $h_n(x)\to 0$ follows from the implication: if $c\geqslant 0$ and $a_n\to 0$, then $c\cdot a_n\to 0$. For the domination condition, let $M:=\sup_{n\geqslant 1}\left\lVert g_n\right\rVert_\infty$. Then $\left\lvert g(x)\right\rvert\leqslant M$ for almost every $x$, hence $$\left\lvert h_n(x) \right\rvert\leqslant \left(2M\right)^p\left\lvert f(x) \right\rvert^p \mbox{ a.e.}$$
norm of a complex number with an imaginary power
Write $z = 1 + 2(-1)^{\frac{1}{3}} = 1 + 2 (e^{i\pi})^{\frac{1}{3}} = 1 + 2e^{\frac{\pi}{3}i} = 1 + 2[\cos \frac{\pi}{3} + i \sin \frac{\pi}{3} ] = 2 + i\sqrt{3}$. $|z|^{2} = z \cdot \overline{z} = 4 + 3 = 7$.
Invertibility of an element in a subalgebra.
Suppose that $b+u1_A$ is invertible, say $(b+u1_A)(b'+u'1_A)=1_A$. Expanding gives $bb'+u'b+ub'+uu'1_A=1_A$. Since $B$ is a subalgebra, the element $bb'+u'b+ub'$ lies in $B$, so necessarily $bb'+u'b+ub'=0$ and $uu'=1$, and hence $(b+u1_B)(b'+u'1_B)=1_B$.
continuous mapping is determined by its values on a dense subset of its domain
The bold line is true because of continuity of $f$ and $g$. This is because sequential continuity is equivalent to metric continuity. That is, the $\epsilon-\delta$ definition of pointwise continuity of a function $f$ is the same as $\lim_{n\to\infty}f(x_{n})=f(x)$ for all sequences $(x_{n})_{n\in\mathbb{N}}$ that converge to $x$.
Calculating the percentage increase
Notice that the percentage increase is calculated relative to the initial value; in this case the percentage increase is $$=\frac{\text{(final number of people)}-\text{(initial number of people)}}{\text{(initial number of people)}}\times 100$$$$= \frac{18-3}{3}\times 100=\frac{15}{3}\times 100 =500\%$$
Proof of A $\cap$ $B^c$ = $\emptyset$ $\implies$ A $\subseteq$ B
Correct, but it can be done much more simply: Suppose $x \in A$. Now suppose $x \not \in B$. Then $x \in B ^C$, and so $x \in A \cap B ^C$, which contradicts $A \cap B^C = \emptyset$. So $x \in B$, and hence $A \subseteq B$.
Prove there is only one solution to the Diophantine equation $p^n - p = q^m - q$ where $p$ and $q$ are odd primes $p\gt q$
Progress: If $p\neq q$, we can notice that: $$p^n-p\equiv 0 \pmod q$$ $$p^{n-1}-1 \equiv 0 \pmod q$$ $$p^{n-1}\equiv 1 \pmod q$$ Since we can apply this argument symmetrically modulo $p$ and $q$, $n$ and $m$ are primes or Carmichael numbers.
Change of real eigenvalues under symmetric perturbation
The answer to both questions is no. For instance, consider $$ A = \pmatrix{-2&-1\\1&1}, \quad D = \pmatrix{3&0\\0&0}. $$ $A$ has $2$ real eigenvalues, $A + D$ has none.
Can every polynomial generate an ideal?
Yes. More generally, let $R$ be a ring. For any $a\in R$, the subset $\langle a\rangle$ is the principal ideal of $R$ generated by $a$. This ideal is the smallest ideal of $R$ containing $a$. A proof can be found here. In your case, $R$ is some polynomial ring $R=A[x_1,\dotsc,x_n]$ for a ring $A$.
Existence of a Borel probability measure
Take $\mu=\sum_{k\geq 1} \delta_{x_k}/2^k$, where $(x_k)$ is a countable dense set.
trouble in getting triangle inequality
Let's prove a version of the Minkowski inequality: $$ \sqrt{\sum_{i=1}^na_i^2}+\sqrt{\sum_{i=1}^nb_i^2}\geq{\sqrt{\sum_{i=1}^n(a_i+b_i)^2}}\tag{$*$}. $$ Squaring both sides and expanding the RHS, we see that $(*)$ is equivalent to $$ \sqrt{\sum_{i=1}^na_i^2}\sqrt{\sum_{i=1}^nb_i^2}\geq\sum_{i=1}^n a_ib_i $$ which is true because it's just the Cauchy-Schwarz inequality. (You can prove it by noting that $0\leq |a-b\lambda|^2$ with $a=(a_1,\ldots,a_n)$ and $b=(b_1,\ldots,b_n)$ and considering the discriminant of $|a-b\lambda|^2$, which is a quadratic in $\lambda$.) Applying $(*)$ with $a_i=x_i-y_i$ and $b_i=y_i-z_i$, we have $$ {\sqrt{\sum_{i=1}^n(z_i-x_i)^2}}\leq\sqrt{\sum_{i=1}^n(x_i-y_i)^2}+\sqrt{\sum_{i=1}^n(y_i-z_i)^2}\leq\sqrt{\sum_{i=1}^\infty(x_i-y_i)^2}+\sqrt{\sum_{i=1}^\infty(y_i-z_i)^2} $$ which is $d(x,y)+d(y,z)$. Now the claim follows by letting $n\to\infty$ in the leftmost expression above to get $d(x,z)\leq d(x,y)+d(y,z)$. In your approach, you wanted to say $\sqrt{\sum_{n=1}^{\infty}[(x_{n}-y_{n})^{2} +(y_{n}-z_{n})^{2}]}$ is greater than or equal to $\sqrt{\sum_{n=1}^{\infty} (x_n-z_{n})^{2}}$ presumably by term-by-term comparison. If so, then you would run into a problem because it is not necessarily true that $a_i^2+b_i^2\geq(a_i+b_i)^2$.
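For a concrete illustration, a small Python sketch (the random vectors are only illustrative): Minkowski holds for every sample, while the term-by-term claim already fails at $a_i=b_i=1$.

```python
import math, random

def norm(v):
    return math.sqrt(sum(t * t for t in v))

random.seed(0)
a = [random.uniform(-1, 1) for _ in range(50)]
b = [random.uniform(-1, 1) for _ in range(50)]

# Minkowski: ||a|| + ||b|| >= ||a + b||
assert norm(a) + norm(b) >= norm([s + t for s, t in zip(a, b)])

# The term-by-term claim a_i^2 + b_i^2 >= (a_i + b_i)^2 is false in general:
print(1**2 + 1**2 >= (1 + 1)**2)   # False, since 2 < 4
```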
Sylow subgroups of $A_{n}$
Hint: $K_{p,n}$ is a normal subgroup of $A_n$.
Exact sequence of vector bundles
This is not true unless the sequence splits. The only thing which is true in general is $\Lambda^{m}E'\otimes\Lambda^{n}E''\cong\Lambda^{m+n}E$ where $m,n$ are the ranks of $E',E''$. Just as an illustration, let $m=n=1$. Then $\Lambda^{\bullet} E'=\mathcal{O}\oplus E'$ and similarly for $E''$. So, the left side in your isomorphism is $\mathcal{O}\oplus (E'\oplus E'')\oplus E'\otimes E''$, while the right side is $\mathcal{O}\oplus E\oplus \Lambda^2 E$, the last term, as I said, being just $E'\otimes E''$. In general these are not even isomorphic as vector bundles, unless $E\cong E'\oplus E''$.
How to prove $ \phi(n) = n/2$ iff $n = 2^k$?
Edit: removed my full answer to be more pedagogical. You know that $\varphi(p) = p-1$, but you need to remember that $\varphi(p^k) = p^{k-1}(p-1).$ Can you take it from here?
Modulus transformation
Let $a^b = dn + e$, where $0 \le e \lt n$ and $a^c = fn + g$, where $0 \le g \lt n$. Then your left side is $$((a^b \ \text{mod} \ n) * (a^c \ \text{mod} \ n)) \ \text{mod} \ n = eg \ \text{mod} \ n \tag{1}\label{eq1}$$ Since $a^{b+c} = (a^b)(a^c) = (dn + e)(fn + g) = dfn^2 + (dg + ef)n + eg$, your right side is $$a^{b+c} \ \text{mod} \ n = eg \ \text{mod} \ n \tag{2}\label{eq2}$$ Thus, your LHS and RHS match.
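A quick check of the identity with Python's three-argument pow (the test triples are arbitrary):

```python
# Check ((a^b mod n) * (a^c mod n)) mod n == a^(b+c) mod n for a few cases.
for a, b, c, n in [(7, 13, 9, 11), (5, 20, 3, 17), (2, 100, 55, 97)]:
    left = (pow(a, b, n) * pow(a, c, n)) % n
    right = pow(a, b + c, n)
    assert left == right, (a, b, c, n)
print("identity holds on all test cases")
```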
If $1, \omega_1,\omega_2...\omega_6$ are $7^{th}$ roots of unity, then find the value of $Im(\omega_1+\omega_2+\omega_4)$
Hint: The quadratic residues modulo $7$ are precisely $1$, $2$ and $4$. Alternatively, you could use the fact that $$\omega_k=\cos(\tfrac{2\pi}{7}k)+\sin(\tfrac{2\pi}{7}k)i,$$ to note that $$\operatorname{Im}(\omega_1+\omega_2+\omega_4)=\sin(\tfrac{2\pi}{7})+\sin(2\cdot\tfrac{2\pi}{7})+\sin(4\cdot\tfrac{2\pi}{7}).$$ Perhaps you could then apply some trigonometric identities if this expression isn't satisfactory.
Continuous Functions and Their Product
First figure out what exactly it is that you need to prove. Continuity of $f\cdot g$ at $0$ means that the limit $\lim_{x\to 0}f(x)\cdot g(x)$ exists and equals $f(0)\cdot g(0)$, which, since $f(0)=0$, is simply $0$. So, you need to show that $\lim_{x\to 0}f(x)\cdot g(x)=0$. Now you can use the $\epsilon$-$\delta$ definition to establish that. You know that $g$ is bounded by some constant $M>0$ and that $\lim_{x\to 0}f(x)=0$. So, given any $\epsilon>0$, consider using $\frac{\epsilon}{M}$ in the definition of limit for $\lim_{x\to 0}f(x)=0$. Done correctly, this should give you the desired proof.
Probability of getting a head on an even number of tosses: why is it 1/3 and not 1/2?
Because $X_{\rm odd} \ne X_{\rm even}$. The most probable result is to get a head with the first toss (we have $P(X = 1) = \frac 12$), and $1$ is odd.
Symbolic Dynamics - Infinite Shift Space of Finite Type with Entropy 0
The simplest example occurs for two symbols. Consider the $2\times2$ transition matrix $$ A=\begin{pmatrix} 1 & 1\\ 0&1\end{pmatrix} $$ (or its transpose). The spectral radius is $1$ but the corresponding topological Markov chain has infinitely many sequences. You can obtain many other examples of a similar type: think of a graph that after a while gets pushed into a fixed symbol.
Properties of a permutation matrix
Hint: By the pigeonhole principle, there exist integers $m<n$ such that $P^m=P^n$. Conclude that we necessarily have $P^{n-m}=I$.
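A small illustration of the pigeonhole conclusion (a Python sketch, assuming NumPy; the permutation chosen is arbitrary):

```python
import numpy as np

# Permutation matrix for 0->2, 1->0, 2->3, 3->1, 4->4.
perm = [2, 0, 3, 1, 4]
P = np.eye(5)[perm]

# Powers of P range over finitely many permutation matrices, so some
# power must return to the identity, as the pigeonhole argument predicts.
Q, k = P.copy(), 1
while not np.array_equal(Q, np.eye(5)):
    Q, k = Q @ P, k + 1
print(k)   # 4, the order of the permutation (lcm of its cycle lengths)
```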
If $(A,B,C,D)$ and $(A,B',C,D')$ are parallelograms, then $(B,B',D,D')$ is a parallelogram
The essence of the problem is the commutativity and associativity of addition of vectors in the vector space associated with the affine plane. In the affine plane there exists a unique translation $T$ which maps point $A$ to point $C$ given by addition of a vector $AC$. This translation also maps line segment $AB$ to $CD$ because they are parallel to each other by assumption that $ABCD$ is a parallelogram, and also maps point $C$ to point $D$ by commutativity. Any translation preserves the parallel line relationship and carries a line to a line parallel to it. If now $AB'CD'$ is another parallelogram, then by transitivity of the parallel relation $BD$ and $B'D'$ are parallel because they are both parallel to $AC$. Since by assumption $AB'CD'$ is a parallelogram, line segment $AB'$ is parallel to $CD'$ and thus the translation $T$ maps $B'$ to $D'$. But now the line segment $BB'$ is mapped to $DD'$ by translation $T$ by using associativity and thus are parallel to each other. This proves that $BB'DD'$ is a parallelogram. In more detail, given point $B'$, there are vectors $v_1=B'A$, $v_2=AB$, and $v_3=B'D'$ which uniquely determine the six points $ABCDB'D'$ by translations using commutativity and associativity of addition of vectors. Thus, stating that $AB'CD'$ is a parallelogram is equivalent to $v_1+v_3=v_3+v_1$ since $v_1$ maps $B'$ to $A$ and then $v_3$ maps $A$ to $C$ while, also, $v_3$ maps $B'$ to $D'$ and $v_1$ maps $D'$ to $C$ by commutativity. Similarly, stating that $ABCD$ is a parallelogram is equivalent to $v_2+v_3=v_3+v_2$ since $v_2$ maps $A$ to $B$ and $v_3$ maps $B$ to $D$ while, also, $v_3$ maps $A$ to $C$ and $v_2$ maps $C$ to $D$. Finally, stating that $BB'DD'$ is a parallelogram is equivalent to $(v_1+v_2)+v_3=v_3+(v_1+v_2)$ but $v_1+v_2$ maps $B'$ to $B$ and $v_3$ maps $B$ to $D$ while $v_3$ maps $B'$ to $D'$ and $v_1+v_2$ maps $D'$ to $D$.
Number of paths between two points on a directed grid
Yes, your answer is correct. In the general problem you must take $X$ right steps and $Y$ up steps, and the number of paths is simply the number of ways of choosing where to take the up (or, equivalently, the right) steps, i.e. $\binom{X+Y}{X}=\binom{X+Y}{Y}$. For example, if you are traversing grid points of equal spacing (i.e. squares) with $6$ right steps and $4$ up steps: $\binom{6+4}{4} = \binom{6+4}{6} = \binom{10}{4} = \binom{10}{6} = 210$. P.S. $\binom{a}{b} = \dfrac{a!}{b!\,(a-b)!}$.
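In Python this is a one-liner with math.comb:

```python
from math import comb

X, Y = 6, 4                             # right steps, up steps
print(comb(X + Y, X), comb(X + Y, Y))   # 210 210
```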
How would I add/subtract weight from ratings to get a weighted average rating?
John Doe's comment was just what I needed, but since it's just a comment I can't choose it as the answer. Anyway, I wrote a brief function that gives more consistent results than my old code. It's also much simpler and shorter. numberOf is the number of songs with a rating, Rating is the rating, and bonus_penalty is how much I'm adding/subtracting based on distance from neutral_rating. I simply do this for each rating, add the results for each function call, and then divide by the number of songs. I believe this is more-or-less what John Doe was proposing, and it appears to work as it should. Thanks!

```vb
Function ScoreForRating(numberOf As Integer, Rating As Integer, _
                        bonus_penalty As Double, neutral_rating As Integer) As Double
    ' Weight the raw score by how far this rating sits from neutral,
    ' scaled by how many songs carry the rating.
    Dim adjustment As Double
    adjustment = 1 + numberOf * ((Rating - neutral_rating) * bonus_penalty)
    ScoreForRating = (Rating * numberOf) * adjustment
End Function
```
Group of 28 divided into 4 teams, probability of at least 1 girl on each team when there are 7 girls.
Label the teams as $1,2,3,4$. The total number of ways to form the $4$ teams is $$ n = \binom{28}{7} \binom{21}{7} \binom{14}{7} \binom{7}{7} $$ Let $v$ be the list of counts, sorted in ascending order, for the number of girls on each of the $4$ teams. Consider $3$ cases . . . Case $(1)$:$\;v=[1,1,1,4]$. For case $(1)$, the number of ways to form the $4$ teams is $$ x_1 = \left(\binom{4}{1}\binom{3}{3}\right) {\cdot} \left(\binom{7}{4}\binom{21}{3}\right) {\cdot} \left(\binom{3}{1}\binom{18}{6}\right) {\cdot} \left(\binom{2}{1}\binom{12}{6}\right) {\cdot} \left(\binom{1}{1}\binom{6}{6}\right) $$ hence the probability for case $(1)$ is \begin{align*} p_1&=\frac{x_1}{n}\\[4pt] &={\Large{\frac { \left(\binom{4}{1}\binom{3}{3}\right) {\cdot} \left(\binom{7}{4}\binom{21}{3}\right) {\cdot} \left(\binom{3}{1}\binom{18}{6}\right) {\cdot} \left(\binom{2}{1}\binom{12}{6}\right) {\cdot} \left(\binom{1}{1}\binom{6}{6}\right) } { \binom{28}{7} \binom{21}{7} \binom{14}{7} \binom{7}{7} }}} \\[4pt] &=\frac{2401}{59202}\approx .04055606230\\[4pt] \end{align*} Case $(2)$:$\;v=[1,1,2,3]$. For case $(2)$, the number of ways to form the $4$ teams is $$ x_2 = \left(\binom{4}{1}\binom{3}{1}\binom{2}{2}\right) {\cdot} \left(\binom{7}{3}\binom{21}{4}\right) {\cdot} \left(\binom{4}{2}\binom{17}{5}\right) {\cdot} \left(\binom{2}{1}\binom{12}{6}\right) {\cdot} \left(\binom{1}{1}\binom{6}{6}\right) $$ hence the probability for case $(2)$ is \begin{align*} p_2&=\frac{x_2}{n}\\[4pt] &={\Large{\frac { \left(\binom{4}{1}\binom{3}{1}\binom{2}{2}\right) {\cdot} \left(\binom{7}{3}\binom{21}{4}\right) {\cdot} \left(\binom{4}{2}\binom{17}{5}\right) {\cdot} \left(\binom{2}{1}\binom{12}{6}\right) {\cdot} \left(\binom{1}{1}\binom{6}{6}\right) } { \binom{28}{7} \binom{21}{7} \binom{14}{7} \binom{7}{7} }}} \\[4pt] &=\frac{2401}{6578}\approx .3650045607\\[4pt] \end{align*} Case $(3)$:$\;v=[1,2,2,2]$. For case $(3)$, the number of ways to form the $4$ teams is $$ x_3 = \left(\binom{4}{3}\binom{1}{1}\right) {\cdot} \left(\binom{7}{2}\binom{21}{5}\right) {\cdot} \left(\binom{5}{2}\binom{16}{5}\right) {\cdot} \left(\binom{3}{2}\binom{11}{5}\right) {\cdot} \left(\binom{1}{1}\binom{6}{6}\right) $$ hence the probability for case $(3)$ is \begin{align*} p_3&=\frac{x_3}{n}\\[4pt] &={\Large{\frac { \left(\binom{4}{3}\binom{1}{1}\right) {\cdot} \left(\binom{7}{2}\binom{21}{5}\right) {\cdot} \left(\binom{5}{2}\binom{16}{5}\right) {\cdot} \left(\binom{3}{2}\binom{11}{5}\right) {\cdot} \left(\binom{1}{1}\binom{6}{6}\right) } { \binom{28}{7} \binom{21}{7} \binom{14}{7} \binom{7}{7} }}} \\[4pt] &=\frac{7203}{32890}\approx .2190027364\\[4pt] \end{align*} Hence the probability that each team has at least one girl is $$ p = p_1+p_2+p_3 = \frac{2401}{59202} + \frac{2401}{6578} + \frac{7203}{32890} = \frac{16807}{26910}\approx .6245633593 $$
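This case analysis can be cross-checked exactly in Python (the helper ways below is mine, written just for this check; it sums over all labelled girl-count patterns $(g_1,g_2,g_3,g_4)$ with each $g_i\ge 1$):

```python
from math import comb
from fractions import Fraction
from itertools import product

total = comb(28, 7) * comb(21, 7) * comb(14, 7) * comb(7, 7)

def ways(gs):
    """Ways to form the labelled teams with gs[i] girls on team i."""
    w, girls, boys = 1, 7, 21
    for g in gs:
        w *= comb(girls, g) * comb(boys, 7 - g)
        girls -= g
        boys -= 7 - g
    return w

good = sum(ways(gs) for gs in product(range(1, 5), repeat=4) if sum(gs) == 7)
print(Fraction(good, total))   # 16807/26910, about 0.6245633593
```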
Conditional Probability for dice rolling
The die is rolled until a 3 or 6 appears, or until four throws have been made, whichever comes earlier. (The fourth throw can be a 3 or 6.) If we have already rolled the die twice, here are the possibilities: a 3 or 6 gets rolled on the third roll, which has probability 1/3; or the third roll doesn't produce a 3 or 6, which means we go to a fourth and final roll, and the probability of that is 2/3. Since these are probabilities conditional on the die being rolled twice without a 3 or 6 showing, 2/3 is the answer.
How to prove that Christoffel symbols are not components of a tensor
I found it. I can't write $\Gamma:=\Gamma_{ij}^kdx^idx^j\frac{\partial}{\partial x^k}$, since that would already assume that the Christoffel symbols are the components of a tensor $\Gamma$.
Is $(\neg p \lor (p \rightarrow q)) \rightarrow\neg p$ a Tautology?
Hint: Try $p = T$ and $q = T$: then the antecedent $\neg p \lor (p \rightarrow q)$ is true, while the consequent $\neg p$ is false.
Limit of $\frac{\pi^h-1}{h}$ as h approaches zero
There's no way we can avoid $\pi$ here. The value of this limit is $\log\pi$: no matter which method we use to get the limit, the exact value is $\log\pi$. We can, however, do this without derivatives. Let's write $\pi=e^{\ln \pi}$: $$\lim_{h \to 0} \frac{e^{h\ln\pi}-1}{h}$$ Now, we take @SangchiLee's advice and say $t=h\log \pi$, so that $h=\frac{t}{\log \pi}$: $$\lim_{t \to 0} \frac{e^t-1}{\frac{t}{\log \pi}}$$ Simplify: $$\lim_{t \to 0} \log\pi\frac{e^t-1}{t}$$ Take the $\log\pi$ out of the limit: $$\log\pi\lim_{t \to 0} \frac{e^t-1}{t}$$ Now, the limit on the right is equal to $1$, so we have: $$\log\pi\cdot 1=\log\pi$$
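A quick numerical check that the difference quotient approaches $\log\pi$:

```python
import math

for h in [1e-2, 1e-4, 1e-6]:
    print(h, (math.pi ** h - 1) / h)     # tends to log(pi)
print("log(pi) =", math.log(math.pi))    # 1.1447298858...
```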
What group is $U(21)/\langle4\rangle$ isomorphic to?
You're off to a good start; you've found that $U(21)/\langle 4\rangle$ has four elements, and you've found representatives for the cosets in $U(21)$. This allows you to make a multiplication table. There aren't many groups of $4$ elements, so you will quickly see which group it is.
Prove that the unit ball in $X$ is not compact
One of the more intuitive definitions/characterizations of compactness in a metric space $(X,d)$ is the following: A set $K \subseteq X$ is compact if and only if every sequence $\{a_k\} \subseteq K$ has a convergent subsequence. So in order to show that the unit ball in the space of all sequences is not compact, you have to find a sequence (of sequences) in the unit ball that does not contain a convergent subsequence.
Solve: $4 \log_5 x- \log_5y =1 \quad \&\quad 5\log_5 x+ 3\log_5 y =14$
Hint: Put $a = \log_5x,\ b = \log_5y \implies 4a-b=1$ and $5a+3b=14 \implies 5a+3(4a-1)=14\ldots$ Can you finish it? The point is to solve for $a$, then for $x$, since $x = 5^a$...
"Continuity" of minimum of a function
I'll restate the problem as I understand it; please correct me if I am wrong. Let $\Pi$ be a subset of $\mathbb{R}^d$ such that for $\pi \in \Pi$, $\sum_i \pi_i = 1, \pi \geq 0$. Furthermore, suppose that the closure of $\Pi$ is equal to the closure of the interior of $\Pi$. Let $\Pi_n$ be those elements of $\Pi$ whose components are rational with a denominator that can be set to $n$. Let $f : \mathbb{R}^d \to [0, \infty)$ be a continuous convex function. Show $$\lim_{n \to \infty} \inf_{\pi_n \in \Pi_n} f(\pi_n) = \inf_{\pi \in \Pi} f(\pi)$$ Let $L = \inf_{\pi \in \Pi} f(\pi)$ and choose a sequence $p_m \in \Pi$ such that $$\lim_{m\to\infty} f(p_m) = L$$ Since $\Pi$ has the property that its closure equals the closure of its interior, we know that we can instead choose another sequence $\pi_m$ such that $\pi_m$ is the center of a ball contained in $\Pi$ for all $m$ and $f(\pi_m)$ also converges to $L$: $$\lim_{m\to\infty} f(\pi_m) = L$$ Let $Q_n$ denote the set of all vectors in $\mathbb{R}^d$ where $\sum_i \pi_i = 1, \pi \geq 0$, and whose components are rational with a denominator that can be set to $n$. Now for each $m$, let $B_m$ denote the ball with $\pi_m$ at its center. Since $B_m$ is contained in the interior of $\Pi$, for all sufficiently large $n$ the set $Q_n \cap B_m$ is a nonempty subset of $\Pi_n$. So for each $m$ choose an $n \geq m$ and an element $\pi'_n$ of $\Pi_n$ that lies within a distance of $2/n$ of $\pi_m$. Denote this mapping from $m$ to $n$ by $n = g(m)$. Now $\pi'_{g(m)} - \pi_m \to 0$, so by the continuity of $f$: $$\lim_{m \to \infty} f(\pi_{g(m)}') = \lim_{m \to \infty} f(\pi_m) = L$$
Find a function for well-formed brackets using generating functions
If Hans Lundmark is right, you could see the Wikipedia article on the Catalan numbers, where the generating function is given under "Proof of the formula". There are also many references in the OEIS.
How to calculate the area of a circle ( given: origin, radius ) on a sphere ( Earth )?
Just a small amplification of the answer by Ross Millikan. Use the same notation as the article he linked to. I take it that your $1000$ km is the along-the-surface distance from the center $C$ of your circle to the furthest points $P$ from the center. In the picture linked to, $C$ is the top of the sphere, and $P$ is any point on the outer edge of the bottom of the cap. Assume that this along-the-surface distance is $d$, and that it is $\le$ $1/4$ of the circumference of the Earth (that's not necessary, but it makes visualization easier). Let the radius of the Earth be $r$. Then the angle $\theta$ subtended by the arc $CP$ at the centre of the Earth is given by $$\theta=\frac{d}{r}.\tag{$\ast$}$$ The "$h$" in the linked picture is given by $h=r-r\cos\theta$. The surface area is $2\pi rh$, which is $$2\pi r^2(1-\cos\theta).\tag{$\ast\ast$}$$ Compute $\theta$ using $(\ast)$, and then use $(\ast\ast)$ to find the surface area.
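Here are $(\ast)$ and $(\ast\ast)$ as a small Python function (the mean Earth radius $r = 6371$ km is an assumed value):

```python
import math

def cap_area(d, r=6371.0):
    """Area enclosed by a circle of along-the-surface radius d (km)
    on a sphere of radius r (km)."""
    theta = d / r                                       # (*)
    return 2 * math.pi * r**2 * (1 - math.cos(theta))   # (**)

print(cap_area(1000.0))       # ~3.135e6 km^2
print(math.pi * 1000.0**2)    # planar approximation ~3.1416e6 km^2
```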
Heat equation with logarithm of the unknown function
If you try to write a solution as a complex Fourier Series, this should break down to a series expansion of $ax+b$. Of course, you will need two boundary conditions to fully solve the equation.
Support of a measure on Borel sets not well defined.
Support is defined in that article as the smallest closed set $S$ such that $\mu (S^{c})=0$. This is the usual definition of support and it is the most convenient. With this definition the complement of the support is the largest open set of measure $0$. If you consider the collection of all open sets of measure $0$, their union is certainly open but it may not have measure $0$, since an uncountable union of sets of measure $0$ need not have measure $0$. That is the reason why not every Radon measure has a support. On any second countable topological space every Radon measure has a support.
Maximising an expression
By AM-GM $$x^2y-xy^2=xy(x-y)\le x\cdot \left(\frac {y+(x-y)}2\right)^2=\frac{x^3}4\le \frac14$$ with equality at the first $\le$ iff $y=x-y$ (i.e. $x=2y$) and at the second $\le$ iff $x=1$. Hence $$x^2y-xy^2\le\frac 14$$ with equality iff $x=1$, $y=\frac 12$.
Please find the radius of the circle.
Strictly speaking, you do not need the information shown in blue, because you can deduce it from the other information shown on the diagram. Hence there is really no reason why that particular piece of information is given. The important thing in this problem is that you can determine exactly where the red circle intersects the $45$-degree diagonal line as shown in the diagram. You know that the green circle has radius $20$ (because two of the radii are explicitly labeled $20$), so you know the portion of the diagonal line between the circumference of the green circle and the center of the green circle (the green dot) is $20$. To determine exactly where the red circle intersects that line, you can use the information that the distance from that point to the green dot is $18$ (as shown on the diagram), or you can use the information that the distance from that point to the circumference of the green circle is $2$, which means the distance to the green dot is $18$. The reason it is important to know where the red circle intersects the diagonal line is that three points determine a circle. You have two points on the red circle (where it intersects the green circle), so this third point determines the center and radius of the red circle. That is the abstract reason why the information in the diagram should be sufficient to answer the question. But as to exactly how to use that information, here are some hints: Use the two perpendicular black lines as axes of a Cartesian coordinate system such that the green dot is at $(0,0)$ and the two intersections of the two circles are at $(0,20)$ and $(-20,0)$. In that case, we can see that the red circle also passes through the point $(-18\cos\frac{\pi}{4},18\sin\frac{\pi}{4}) = (-9\sqrt{2},9\sqrt{2})$; that's where the red circle intersects the diagonal line in the upper left quadrant. Let the coordinates of the red dot be $(x_0,y_0)$. Then $y_0 = -x_0$, so we can write the coordinates as $(x_0,-x_0)$. The distance from the red dot to $(0,20)$ is $\sqrt{x_0^2 + (-x_0 - 20)^2}$ and the distance from the red dot to $(-9\sqrt{2},9\sqrt{2})$ is $\sqrt{(x_0 + 9\sqrt{2})^2 + (-x_0 - 9\sqrt{2})^2}$. These two distances must be equal (because each is a radius of the red circle), that is, $$\sqrt{x_0^2 + (-x_0 - 20)^2} = \sqrt{(x_0 + 9\sqrt{2})^2 + (-x_0 - 9\sqrt{2})^2}.$$ Solve for $x_0$. (Hint: simplify the last equation so that you can apply the quadratic formula.) With that information, it is easy to find the radius of the circle.
Integral representation of the modified Bessel function involving $\sinh(t) \sinh(\alpha t)$
$K_\alpha(x)=\int_0^\infty e^{-x\cosh t}\cosh\alpha t~dt$ $\alpha K_\alpha(x)=\alpha\int_0^\infty e^{-x\cosh t}\cosh\alpha t~dt$ $=\int_0^\infty e^{-x\cosh t}~d(\sinh\alpha t)$ $=[e^{-x\cosh t}\sinh\alpha t]_0^\infty-\int_0^\infty\sinh\alpha t~d(e^{-x\cosh t})$ $=\int_0^\infty e^{-x\cosh t}x\sinh t\sinh\alpha t~dt$ $\therefore\dfrac{\alpha}{x}K_\alpha(x)=\int_0^\infty e^{-x\cosh t}\sinh t\sinh\alpha t~dt$
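A numerical spot-check of the final identity (assuming SciPy's kv and quad; the values of $\alpha$ and $x$ are arbitrary):

```python
import math
from scipy.integrate import quad
from scipy.special import kv

alpha, x = 1.7, 2.3
f = lambda t: math.exp(-x * math.cosh(t)) * math.sinh(t) * math.sinh(alpha * t)
val, _ = quad(f, 0, 20)          # integrand is utterly negligible beyond t = 20
print(val)                       # should match (alpha/x) * K_alpha(x)
print(alpha / x * kv(alpha, x))
```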
When do the Nakano identities hold?
The stronger version of Nakano's identity that you have mentioned cannot be true. You wrote "Let X be a Kähler manifold and (E,h) a holomorphic hermitian vector bundle. Then if ∇ is any connection on E: $[\Lambda, \bar{\partial}] = -i(\nabla^{1,0})^*$" One can get a contradiction to this more general statement in the following way. Indeed, we know the Nakano identity holds for the Chern connection, so just cook up a new connection by adding to the Chern connection a non-trivial (1,0) form with values in End(E) (or with values in the bundle of skew-hermitian endomorphisms of E with respect to $h$, if you insist on the connection being compatible with $h$). Then you see that the right-hand side of the above equation gives two different answers when applied to the Chern connection and when applied to the new connection. I hope this helps.
Derivation of expected improvement for Bayesian Optimization.
I assume that $x^+=\text{argmax}_{i<t+1}f(x_i)$. You can evaluate the integral by substitution: try $u=(\mu(x)-f(x^+)-I)/\sigma(x)$ s.t. $I = \mu(x)-f(x^+)-\sigma(x)u$ and change the lower limit accordingly. This gives 3 separate integrals that can be expressed in terms of the pdf and cdf of $\mathcal{N}(0,1)$, which sum to the expression above.
Continuous extension of a real function
A constructive and explicit proof proceeds as follows. Since $E$ is closed, $U=\mathbb{R}\setminus E$ is a countable union of disjoint open intervals, say, $U=\bigcup (a_n,b_n)$. Necessarily, we must have that $a_n,b_n\in E$. Define $f(x)$ as follows. $$ f(x) = \begin{cases} g(x) &\text{if }x\in E \\ \frac{x-a_n}{b_n-a_n}g(b_n)+\frac{b_n-x}{b_n-a_n}g(a_n) & \text{if }x\in[a_n,b_n] \end{cases} $$ Notice first that $f(x)$ is well-defined and also, for all $x\in(a_n,b_n)$, either $g(a_n)\le f(x)\le g(b_n)$ or $g(b_n)\le f(x)\le g(a_n)$ depending on whether $g(a_n)\le g(b_n)$ or otherwise. Clearly, $f$ is continuous on $U$. Now suppose that $x\in E$ and $\epsilon>0$. Then there are a few cases. Case 1: Suppose that for every $\eta>0$, $(x-\eta,x)\cap E\not=\emptyset$ and $(x,x+\eta)\cap E\not=\emptyset$. Then since $f\vert_E=g$, there is some $\delta>0$ such that if $y\in E$ and $\vert x-y\vert<\delta$ then $\vert f(x)-f(y)\vert<\epsilon$. Because of the condition we have for Case 1, we may choose some $x_1,x_2\in E$ with $x-\delta<x_1<x<x_2<x+\delta$. Choose $\delta'=\min\{x-x_1,x_2-x\}$. If $\vert y-x\vert<\delta'$, then if $y\in E$, we're done. If $y\in U$, then $y\in(a_m,b_m)$ for some $m\in\mathbb{N}$. Furthermore, $a_m,b_m\in E$ and are within $\delta$ of $x$. Also, $f(y)$ lies between $g(a_m)$ and $g(b_m)$. Thus $f(y)$ is within $\epsilon$ of $f(x)$ since $f(a_m)=g(a_m)$ and $f(b_m)=g(b_m)$ are within $\epsilon$ of $f(x)$. Case 2: There is some $\eta>0$ for which $(x-\eta,x)\cap E=\emptyset$ or $(x,x+\eta)\cap E=\emptyset$. In this case, $x$ is an endpoint of one of the intervals of $U$. Thus $f$ is linear on either $[x,x+\eta)$ or $(x-\eta,x]$ (maybe both). Certainly, we can get a $\delta>0$ corresponding to $\epsilon$ on this side of $x$. For the other side of $x$, use the argument from Case 1 to get some $\delta'$. Choosing $\delta''=\min\{\delta,\delta'\}$ proves the result.
Formula for how many natural numbers $n$ make $x$ divisible by $2^n$ when $2^n\leq x$
If $x$ has the decomposition $x=2^q\cdot m$ for some non-negative integers $q$ and $m$ where $m$ is odd, then $f(x,n)=1$ if and only if $n\in\{1,2,\cdots, q\}$. Also, $\log_2x=q+\log_2 m$, hence $$ \sum_{n=1}^{\log_2 x}f(x,n)=q $$
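In code, $q$ is just the 2-adic valuation of $x$; a minimal Python sketch (the function name q mirrors the notation above):

```python
def q(x):
    """Largest q with 2**q dividing x (the 2-adic valuation), for x >= 1."""
    count = 0
    while x % 2 == 0:
        x //= 2
        count += 1
    return count

assert q(40) == 3   # 40 = 2^3 * 5, so f(40, n) = 1 exactly for n in {1, 2, 3}
assert q(7) == 0
```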
Determining the proper function for morphisms
Here is an explanation of where those expressions come from. $ K=\{x+y\sqrt3 : x,y \in \mathbb Q \} $ is a field. $K$ is also a 2-dimensional vector space over $\mathbb Q$, with basis $\{1, \sqrt3\}$. Given $a=x+y\sqrt3 \in K$, the map $z \mapsto az$ is a $\mathbb Q$-linear transformation of $K$ whose matrix with respect to the basis $\{1, \sqrt3\}$ is $$ M(a)=\begin{pmatrix}x&3y\\y&x\\\end{pmatrix} $$ The map $M: a\mapsto M(a)$ is an injective ring homomorphism $K \to \mathbb Q^{2\times 2}$. The map $M$ induces an injective group homomorphism $K^{\times} \to {(\mathbb Q^{2\times 2})}^{\times}=GL(2,\mathbb Q)$. In this context, $G_1$ is a subgroup of $K^{\times}$ and $G_2=M(G_1)$. So $f=M$ is a group isomorphism. Note that $G_2 = \ker \det$ and $G_1 = \ker (\det \circ M)$.
Example of function on R with given property
Consider $$f(x) = \begin{cases} 0, & \text{if $x \le 0$} \\ e^{-\frac{1}{x^2}}, & \text{if $x > 0$} \end{cases}$$ Hope that will work.
A question about Theorem 2.2 in Gorenstein's book and a finite $p$-group
The answer to the first question is yes: this is quite possible. Firstly, we can assume that $G$ has more than one maximal subgroup. Let $H,K$ be cyclic maximal subgroups of $G$. Note that $H\cap K\leq Z(G)$, since $H\cap K$ commutes with the elements of both $H$ and $K$, and hence with the elements of $HK=G$. Notice that $G/(H\cap K)$ is abelian, as $\overline G=\overline H\times \overline K$.
How to solve $\frac{\partial u(x,t)}{\partial t}=-axu(x,t)-bx\frac{\partial u(x,t)}{\partial x}+cx^2 \frac{\partial^2 u(x,t)}{\partial x^2}$?
$$ \frac{\partial u(x,t)}{\partial t}=-a\,x\,u(x,t)-b\,x\,\frac{\partial u(x,t)}{\partial x}+c\,x^2\, \frac{\partial^2 u(x,t)}{\partial x^2} \tag 1 $$ The method of separation of variables seems easier in this case. We look first for particular solutions of the form $u(x,t)=X(x)T(t)$. $$XT'=-axXT-bxX'T+cx^2X''T$$ $$\frac{T'}{T}=-ax-bx\frac{X'}{X}+cx^2\frac{X''}{X}=\lambda$$ $$\frac{T'}{T}=\lambda \quad\text{leads to}\quad T(t)=c_1 e^{\lambda t}$$ $cx^2X''-bxX'-(ax+\lambda)X=0\quad$ is an ODE of Bessel type, leading to: $$X(x)=c_2 x^{-(b+c)/(2c)}I_{\pm\nu}\left(2\sqrt{\frac{ax}{c}} \right)$$ $I_\nu$ is the modified Bessel function of the first kind and order $\nu=\sqrt{\left(\frac{b+c}{c}\right)^2+4\frac{\lambda}{c}}$ . A particular solution of PDE $(1)$ is: $$u=e^{\lambda t}x^{-(b+c)/(2c)}I_{\pm\nu}\left(2\sqrt{\frac{ax}{c}} \right) \tag 2$$ This particular solution depends on the parameter $\lambda$. The general solution is any linear combination of particular solutions with different $\lambda$: $$u(x,t)=\int f(\lambda)e^{\lambda t}x^{-(b+c)/(2c)}I_{\nu_{(\lambda)}} \left(2\sqrt{\frac{ax}{c}} \right)d\lambda +\int g(\lambda)e^{\lambda t}x^{-(b+c)/(2c)}I_{-\nu_{(\lambda)}}\left(2\sqrt{\frac{ax}{c}} \right)d\lambda$$ $f(\lambda)$ and $g(\lambda)$ are two arbitrary functions.$\quad \nu_{(\lambda)}=\sqrt{\left(\frac{b+c}{c}\right)^2+4\frac{\lambda}{c}}$. Note: the result from MAPLE reported by Mariusz Iwaniuk in a comment is not the general solution: the variable $t$ doesn't appear in it. It corresponds to the particular case $\lambda=0$ of Eq. $(2)$ above, where the variable $2\sqrt{\frac{ax}{c}}$ becomes $2\sqrt{\frac{-ax}{c}}$ and thus the modified Bessel function becomes the Bessel function $J_\nu$. Note: the arbitrary functions $f(\lambda)$ and $g(\lambda)$ have to be determined according to the boundary conditions (which are not specified in the wording of the question). This is in fact the most difficult part of the job. If some boundary conditions were specified, the method of characteristics might become easier than the above method of separation of variables, which in many cases leads afterwards to hard calculus in order to determine the arbitrary functions. All depends on the kind of boundary conditions.
Exponents across = signs
Both approaches are more or less correct, but your second approach is wrongly executed. There are also a few issues with the execution of your first approach. First, a minor issue: you should know that $2g\ne 0$, but since you divide by it in the equation one can assume that's true. Second, in the step where you go from $R(2g) = V^2$ to $\sqrt{R(2g)} =V$ you're assuming that $V$ is non-negative. The domain of the problem may require this, but in general one would solve for both solutions $\pm\sqrt{R(2g)} = V$. Third, you have to know that $R(2g)\ge 0$, or you would not have any real solutions. The correct execution of your second approach is: $$R = V^2/(2g)$$ $$\pm\sqrt{R} = V/\sqrt{2g}$$ $$\pm\sqrt{R}\sqrt{2g}= V$$ $$\pm\sqrt{R(2g)} = V$$ $$V = \pm\sqrt{R(2g)}$$ The first step requires a bit of explanation: moving the exponent means taking the square root of both sides, and as mentioned you have to include a $\pm$ to get both solutions. The second approach has the additional problem that the third issue becomes a little more complicated, since you rely on both $R$ and $2g$ being non-negative.
Integral notation, integrating over integers.
In Lebesgue's theory of integration, one can integrate over any "measurable" set of real numbers, and non-measurable sets are exotic things not encountered in practice. (In the context of probability theory, however, there are sets, failing to be measurable with respect to certain sigma-algebras, that must be considered. But you should consider this parenthetical comment optional until you learn some of the relevant things in probability theory.) By the usual way of assigning a measure to a set of real numbers, the measure of the set of all integers is $0,$ and consequently if one integrates any function over that set one gets $0.$ But one can assign other measures to the set of all integers, in which a positive number is assigned to some or all sets containing just one integer. The most usual way to do this is that the measure of each set $\{n\}$ is $1.$ In that case, the integral over the set of all integers is just the sum of an infinite series.
Understanding $\sqrt{x^2}$ in a rational expression as x goes to infinity in Calculus
They do not violate the algebraic rule that you stated. Normally $\sqrt{x^2}=|x|$ but if x is positive $|x|=x$. So therefore $\sqrt{x^2}$ is just $x$. We have not violated any rules here because there is no sum underneath the square root.
Being Careful with open sets
It is justified. Indeed, $f$ is continuous on $[0, 1)$ iff for every $x \in [0, 1)$ and every $\epsilon$, there is a $\delta$ such that for all $y \in [0,1)$ with $|y-x| < \delta$, $|f(y) - f(x)| < \epsilon$. This condition is true: pick $\eta = \frac{1+x}{2}$ and use that $f$ is continuous on $[0, \eta]$.
Generating function with singularity at $x=0$
It appears that the form of the generating function you derived is incomplete. In fact, careful calculation shows that it satisfies a first order ODE similar to yours with an extra term $$A'(x)+\frac{2x+1}{2x(x+1)}A(x)=\frac{a_0}{2x(x+1)^2}+3\frac{a_1+a_0}{2(x+1)^2}$$ Multiply by the integrating factor $\sqrt{x(1+x)}$ and rewrite to obtain $$\sqrt{x(1+x)}A(x)=C+a_0\int dx\frac{1}{2\sqrt{x}(x+1)^{3/2}}+3(a_0+a_1)\int dx\frac{\sqrt{x}}{2(x+1)^{3/2}}$$ which evaluate to the final result $$A(x)=\frac{C}{\sqrt{x(1+x)}}+a_0\frac{1}{1+x}+3(a_1+a_0)\left(\frac{\text{arcsinh}(\sqrt{x})}{\sqrt{x(1+x})}-\frac{1}{1+x}\right)$$ Evidently, for the choice $a_1=-\frac{2}{3}a_0$ one obtains the solution desired, so there's no contradiction.
Understanding relation between Laurent Series and Singularities
Write it as $\frac {z-\sin\, z}{z\sin \,z}$ and apply L'Hopital's Rule twice to see that the limit as $z \to 0$ is $0$. The function has a removable singularity at $0$.
Projectivity maps that fix three points
The statement only holds for the one-dimensional case, the projective line. One way to prove it is by showing that the cross ratio remains invariant under projectivities, and verifying that given three points, the cross ratio already uniquely defines a fourth. Another approach would be by observing that a projectivity in $d$ dimensions is uniquely defined by $d+2$ points and their images, as a generalization of this post. So if $d+2$ points are fixed, the uniquely defined projectivity has to be the identity. In the general case you have to require the preimage and image points to be in generic position, e.g. no three collinear. For $d=1$ this reduces to a requirement for the points to be distinct.
What is the real and imaginary part of complex infinity?
Complex infinity is just that: $\infty$. In particular, as far as we can make sense of this at all, we have $\infty=-\infty=i\infty$ (or in fact $\infty=z\infty$ for any finite non-zero $z\in\Bbb C$). It makes no sense to speak of its real or imaginary part or its argument. In spite of the same symbol being used, the $\infty$ that is added to $\Bbb C$ in its one-point compactification is not directly related to that $\infty$ that is (together with $-\infty$) added to $\Bbb R$ in its two-point compactification. (There is also a one-point compactification of $\Bbb R$, and the $\infty$ used for that may perhaps be identified with the complex infinity if we view the - compactified - real line as a great circle of the Riemann sphere). I'd even say that the uses of the symbol $\infty$ in things like $\sum_{n=1}^\infty a_n$ or $\|f\|_\infty$ are only loosely related to these as well. (And we haven't even started with infinities occurring as cardinalities and being written with a whole different bunch of symbols)
sequence of bounded domains in $\mathbb R^n$
The second inclusion is sometimes false. E.g. in $\mathbb R$, $\Omega_k=(0,1+1/k)$.
Integrate $\int\frac{1}{x^6} \sqrt{(1-x^2)^3} ~ dx$
Let $x = \sin(\theta)$. We then get $dx = \cos(\theta) d \theta$. Hence, $$I= \int \frac{\cos^3(\theta)}{\sin^6(\theta)} \cos(\theta) d \theta = \int \cot^4(\theta) \csc^2(\theta) d \theta$$ Let $\cot(\theta) = t$, then $-\csc^2(\theta) d \theta = dt$. Hence, $$I = -\int t^4 dt = -\frac{t^5}{5} + c = -\frac{\cot^5(\theta)}{5} + c = -\frac15 \left( \frac{\sqrt{1-x^2}}{x} \right)^5 + c$$
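One can confirm symbolically that this antiderivative differentiates back to the integrand (a SymPy sketch, taking $0<x<1$ so that $\sqrt{(1-x^2)^3}=(1-x^2)^{3/2}$):

```python
from sympy import symbols, Rational, diff, simplify

x = symbols('x', positive=True)
F = -Rational(1, 5) * (1 - x**2)**Rational(5, 2) / x**5   # candidate antiderivative
integrand = (1 - x**2)**Rational(3, 2) / x**6             # sqrt((1-x^2)^3)/x^6
print(simplify(diff(F, x) - integrand))                   # 0
```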
Triangle perimeter and area
Let the side lengths of the triangle be $a,b,c$, and let $P = a+b+c$. By Heron's formula, $$A = \sqrt{\frac P2\left(\frac P2-a\right)\left(\frac P2-b\right)\left(\frac P2-c\right)}$$ From the given information, $$A = 2\sqrt{P}$$ Equating the two and checking that $P\ne 0$, $$\begin{align*} \sqrt{\frac 12\left(\frac P2-a\right)\left(\frac P2-b\right)\left(\frac P2-c\right)} &= 2\\ \left(\frac P2-a\right)\left(\frac P2-b\right)\left(\frac P2-c\right) &= 8\\ (P-2a)(P-2b)(P-2c) &= 64\\ (b+c-a)(c+a-b)(a+b-c) &= 64 \end{align*}$$ $a, b,c$ are all positive integers and are all different, so $P-2a, P-2b, P-2c$ are different positive integral factors of $64$. First, any factor triplet containing a $1$ is rejected. (Why?) Then the only decomposition with different integers is $2\times4\times8$. Without loss of generality, let $$\begin{align*} P-2a &= 2\\ P-2b &= 4\\ P-2c &= 8\\ 3P - 2(a+b+c) &= 2+4+8\\ P &= 14\\ a &= 6\\ b &= 5\\ c &= 3 \end{align*}$$ So the triangle is a $6-5-3$ triangle.
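A brute-force Python search over small triangles, using the condition $(P-2a)(P-2b)(P-2c)=64$ derived above, confirms this is the only solution with distinct sides (the search bound 60 is arbitrary):

```python
for a in range(1, 60):
    for b in range(a + 1, 60):
        for c in range(b + 1, a + b):          # b < c < a+b: triangle, distinct sides
            P = a + b + c
            if (P - 2*a) * (P - 2*b) * (P - 2*c) == 64:
                print(a, b, c, P)              # 3 5 6 14 -- the only hit
```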
Special rules regarding diagonal entries for covariance matrices
$E(X_1-\mu_1)(X_1-\mu_1)=E(X_1^{2}+\mu_1^{2}-2\mu_1X_1)=EX_1^{2}+\mu_1^{2}-2\mu_1\mu_1$ by linearity of expectation and the fact that expectation of a constant is the constant itself. Hence $E(X_1-\mu_1)(X_1-\mu_1)=EX_1^{2}-(\mu_1)^{2}=EX_1^{2}-(EX_1)^{2}=var X_1$. Similarly handle the others.
Is $\sum i^{1/i}$ bounded?
You should use the following fact: If $a_n\to a$, then $\,\,\dfrac{a_1+\cdots+a_n}{n}\to a\,\,\,$ as well. This is a consequence of the Cesàro-Stolz Theorem. In our case $$ a_n=n^{1/n}\to 1, $$ and hence $$ \frac{1^{1/1}+2^{1/2}+\cdots+n^{1/n}}{n}\to 1. $$
A conceptual Question on Diagonalization of Matrix
Hints for (b). Denote the minimal polynomials of $ST$ and $TS$ by $m_{ST}$ and $m_{TS}$ respectively. Suppose $ST$ is diagonalisable. Since $ST$ is invertible, $0$ is not a root of $m_{ST}$. Hence $p(x)=x\,m_{ST}(x)$ is a product of distinct linear factors. Now, show that $p(TS)=0$ and hence $m_{TS}|p$. Conversely, suppose $TS$ is diagonalisable, so that $m_{TS}$ is a product of distinct linear factors. By considering $S\,m_{TS}(TS)\,T$ and by using the assumption that $ST$ is invertible, show that $m_{TS}(ST)=0$ and hence $m_{ST}|m_{TS}$.
Modular multiplicative inverse proof
What the statement means is that $5 \times 4^{-1} \equiv 3 \pmod{7}$. The modular inverse of $4$ is defined as the number $x$ such that $4x\equiv 1 \pmod{7}$ (if it exists), and it's easy to check that $x=2$ satisfies this. So $5 \times 4^{-1} \equiv 5 \times 2 \equiv 3 \pmod{7}$. Alternatively, $5/4$ can be defined as the number that when multiplied by $4$, gives $5$, and $3 \times 4\equiv 12 \equiv 5 \pmod{7}$ as required.
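In Python 3.8+ the three-argument pow accepts a negative exponent, so the whole check takes three lines:

```python
inv4 = pow(4, -1, 7)
print(inv4)             # 2, since 4*2 = 8 is congruent to 1 (mod 7)
print(5 * inv4 % 7)     # 3, i.e. 5/4 is congruent to 3 (mod 7)
print(3 * 4 % 7)        # 5, confirming 3*4 is congruent to 5 (mod 7)
```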
Ramanujan-Sums... How do they do?
In your case, consider the function $$ \frac{x}{e^{2\pi x}-1} $$ which has the Mellin transform $$ \left( 2\,\pi \right) ^{-s-1}\Gamma \left( s+1 \right) \zeta \left( s+1 \right). $$ Then just follow the technique used in this similar problem.
Degree of a differential equation
An explicit differential equation $$y^{(n)}=f(x,y,y',...,y^{(n-1)})$$ defines an ($x$-dependent) vector field $$v=(y',y'',...,f(...))$$ on the state space with coordinates $$u=(y,y',...,y^{(n-1)})$$ so that $$u'=v.$$ That it is explicit could also be described as the highest derivative having degree 1. In an implicit ODE $$0=F(x,y,y',...,y^{(n)})$$ there might be several directions, that is, solutions for $y^{(n)}$ of that equation that satisfy the DE at a given point $(x,y,...,y^{(n-1)})$. If the highest derivative occurs as a polynomial $$F=c_d(y^{(n)})^d+...+c_1y^{(n)}+c_0,$$ where its coefficients $c_k=c_k(x,y,...,y^{(n-1)})$ may depend on the independent variable and lower order derivatives, then the degree $d$ of this polynomial bounds the number of directions at that point. As one usually considers real ODE with real solutions, this bound $d$ on the directions that a solution may follow at a given point may be pessimistic, as the polynomial may have many complex roots.
Solve Equation from Table
Actually this kind of function-guessing problem falls into the category of polynomial interpolation. Let's say you start with a line through $(x_0,y_0)$ and $(x_1,y_1)$, and let $y = f(x)$ for easier notation; the answer is obvious: $$f(x) = \frac{(y_1-y_0)}{(x_1-x_0)}(x-x_0) + y_0$$ Let's simplify it to $$f(x) = p_0 x + k_0$$ Now you want to make $(x_2,y_2)$ also fit the function above. We add something to $f(x)$ and call it $g(x)$: $$g(x) = f(x) + k*(x-x_1)*(x-x_0)$$ and try to figure out $k$ by substituting $g(x_2) = y_2$. The latter factor is there because it becomes $0$ at the older values that are already covered by $f(x)$, so at those points $g(x)$ is purely $f(x)$. This can be done recursively for all data values. (And there is a mechanical method to work it out too! Try to find it yourself!) However this is not a good answer for a statistical problem, since it totally ignores the fluctuations induced by the higher-degree terms.
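The "mechanical method" hinted at above is Newton's divided differences; here is a compact Python sketch of it (the function name is my own):

```python
def newton_interpolation(xs, ys):
    """Build the interpolating polynomial incrementally, as described above:
    each new point contributes a term k * (x - x0) * ... * (x - x_{j-1})."""
    coeffs = list(ys)                    # divided differences, computed in place
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])

    def p(x):
        result, basis = 0.0, 1.0
        for c, x0 in zip(coeffs, xs):
            result += c * basis
            basis *= x - x0
        return result
    return p

p = newton_interpolation([0, 1, 2], [1, 3, 9])   # fits 2x^2 + 1
print([p(t) for t in (0, 1, 2)])                 # [1.0, 3.0, 9.0]
```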
Why normalize the eigenvectors when computing SVD and what happens if you do not normalize the eigenvectors?
Another way of phrasing the singular value problem is this: suppose we have a linear map $A \colon X \to Y$ between finite-dimensional inner product spaces $X$ and $Y$. We'll say that $x_1, \ldots, x_n \in X$ are a singular basis with respect to $A$ if there exist scalars $\lambda_1 \geq \ldots \geq \lambda_n \geq 0$ such that: The $x_i$ are pairwise orthogonal, i.e. we have $\langle x_i, x_j \rangle = 0$ for $i \neq j$, The images of the $x_i$ are pairwise orthogonal, i.e. we have $\langle Ax_i, Ax_j \rangle = 0$ for $i \neq j$, and The operator $A$ "acts with norm $\lambda_i$" along the vector $x_i$, i.e. we have $\lVert Ax_i \rVert = \lambda_i \lVert x_i \rVert$ for all $i$. Furthermore, you can find that a singular value decomposition exists by getting the $x_i$ as eigenvectors of $A^* A$, and that it is fairly unique (there is some wiggle room for singular vectors with equal singular values). Let us furthermore let $y_1, \ldots, y_m$ be a basis of $Y$ such that $y_i = A x_i$ for all positive singular values $\lambda_i > 0$. With respect to the bases $(x_i)$ and $(y_i)$, the linear transformation $A$ looks like a diagonal matrix where the $(i, i)$ entry is $1$ if $\lambda_i > 0$, and $0$ otherwise. So while we can define the singular values just fine with unnormalised bases, they don't show up nicely in the matrix factorisation. On the other hand if we normalise the vectors, then both $y_i = A x_i$ and $\lVert A \widehat{x}_i \rVert = \lambda_i$ force that $A \widehat{x}_i = \lambda_i \widehat{y}_i$, and so with respect to these orthonormal bases the operator $A$ looks diagonal, with the singular values down the diagonal. In summary, you can still do a decomposition $A = U \Sigma V^{-1}$ where $U$ and $V$ have orthogonal columns and $\Sigma$ is diagonal, but the singular values will not appear in $\Sigma$: all you can really tell directly from such a diagonal matrix is the rank of $A$.
The I-projection of a distribution to a family of distributions
If we let $\mathcal{L}_P \subset \mathcal{L}$ be the family of joint distributions on $X \times X$ with marginals $P$, then from the equality you state we can notice that for any $\widetilde{P} \in \mathcal{L}_P$, $$ D( \widetilde{P}\|Q_1 \times Q_2) = D(\widetilde{P}\|P\times P) + D(P\|Q_1) + D(P\|Q_2) \\ \ge D(P\|Q_1) + D(P\|Q_2).$$ Further, $\widetilde{P} = P \times P \in \mathcal{L}_P$ and satisfies the above with equality. Immediately, we have that the minimiser of $ D(\widetilde{P} \|Q_1 \times Q_2)$ in $\mathcal{L}_P$ must be the product distribution $P \times P$. But this is true of all $P$, and $\mathcal{L} = \bigcup_{P} \mathcal{L}_P$, and so the minimiser of $D(\widetilde{P} \|Q_1 \times Q_2)$ over $\mathcal{L}$ is also a product distribution. That's the first part done - you'd almost got there. Let's do the second part. We want to minimise $$ f(P) := D(P\|Q_1) + D(P\|Q_2).$$ I'll do this for discrete $X$, but the argument generalises trivially. $$ f(P) = \sum_x P(x) \log \frac{P(x)^2}{Q_1(x) Q_2(x)} = 2 \sum_x P(x) \log \frac{P(x)}{\sqrt{Q_1(x) Q_2(x)}}$$ Define $R(x) = \frac{\sqrt{Q_1(x) Q_2(x)}}{ Z}$ where $Z = \sum_x \sqrt{Q_1(x) Q_2(x)}$. Notice that $R$ is a distribution. We can further write $$ f(P) = 2\sum_x P(x) \log \frac{P(x)}{ZR(x)} = -2\log Z + 2D(P\|R).$$ Now, the first term $-2\log Z$ in the above is a constant - it depends on $(Q_1, Q_2),$ but not on our decision variable $P$. The second term is a KL divergence, so it's non-negative. In particular it's minimised at $P = R$, and we're done.
Evaluate: $C_0+\frac{C_1}2+\frac{C_2}3+\cdots\frac{C_n}{n+1}$
HINT: $$(1+x)^n=\sum_{r=0}^n\binom nr x^r$$ $$\int_0^1(1+x)^n=\sum_{r=0}^n\binom nr\int_0^1x^r$$
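Since $\int_0^1(1+x)^n\,dx=\frac{2^{n+1}-1}{n+1}$, the sum should equal $\frac{2^{n+1}-1}{n+1}$; a quick exact check in Python for $n=6$:

```python
from math import comb
from fractions import Fraction

n = 6
s = sum(Fraction(comb(n, r), r + 1) for r in range(n + 1))
print(s, Fraction(2**(n + 1) - 1, n + 1))   # both 127/7
```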
Help with the probability generating function for a conditional distribution.
I would prefer your earlier step to be written like $$g_{X}^{\,}(t) = E_{X}^{\,}[t^{X}]= E_N^{\,}[E_{X}^{\,}[t^{X}\mid N=n]] = E_N^{\,}[(q+pt)^{N}]$$ and, since the generating function for $N$ is $g_{N}^{\,}(s) =E_N^{\,}[s^{N}]$, you can let $s=q+pt$ and thus $$g_{N}^{\,}(q+pt) =E_N^{\,}[(q+pt)^{N}]$$ implying $$g_{X}^{\,}(t) = g_{N}^{\,}(q+pt)$$
Looking for simple, continuous differentiable decreasing function example that satisfies several conditions
Yes. We get: $$ \omega'(x) = \frac{\left(\int_x^1 c(t)dt\right)[c(x) + xc'(x)] + xc(x)^2}{\left(\int_x^1c(t)dt\right)^2} $$ Suppose we want $\omega'(1/2)<0$. So we want: $$ \left(\int_{1/2}^1 c(t)dt\right)[c(1/2) + (1/2)c'(1/2)] + (1/2)c(1/2)^2 < 0 \quad (Eq *)$$ So just define $c(x)$ to have a very large (negative) derivative at 1/2, but one that lasts for a short time only, so the integral $\int_{1/2}^1 c(t)dt$ is still significant. For example, fix a small $\delta>0$ and define $c(x)$ so its derivative is: $$ c'(x) = \left\{\begin{array}{cc} -1 & \mbox{if $x \in [0, 1/2-\delta]$}\\ \mbox{tent(x)} & \mbox{if $x \in [1/2-\delta, 1/2+\delta]$} \\ -1 & \mbox{if $x \in [1/2+\delta, 1]$} \end{array}\right. $$ where $tent(x)$ is a "tent" function such that $tent(1/2-\delta)=tent(1/2+\delta) = -1$ and $tent(1/2) = -M$ for some large value of $M$. You can choose $\delta$ much smaller than $1/M$ so the effect of the tent does not dramatically change the $c$ values for any of the terms in (Eq *) except for $c'(1/2)$. For example if $c(0)=10$ and $\delta$ is very small, then $c(x) \approx 10-x$ for all $x \in [0,1]$, whereas $c'(1/2)=-M$, which is as negative as we like, so (Eq *) is easy to satisfy.
Random vector $(X,Y)$: expectation value and standard deviation of $X$ and $Y$ individually.
We can use the law of total expectation (there are varying names for this): $$ E(X) = \sum_x x P(X = x) = \sum_{x,y} x P(X = x, Y = y). $$ From this, you can input the values of $x$ and $y$, and do the sum. For finding the variance (and hence standard deviation), try to think of how you can write it as a sum like I have. You've already got them in a nice form to work with. :)
Probability that there are exactly two wrongly addressed envelopes
If two randomly chosen envelopes are wrongly addressed, then the other two envelopes can only be correctly addressed if the two persons who receive the randomly chosen envelopes can get the letter that is meant for them by switching. Person $A$ must have received the letter for person $B$ and vice versa. If not, then person $C$ or $D$ will receive a letter meant for $A$ or $B$ and more than $2$ letters are wrongly addressed. Let's say that the order $\left(1,2,3,4\right)$ stands for a fully correct addressing. The first two letters are wrongly addressed if we are dealing with, for instance, $\left(2,4,.,.\right)$. Counting tells you that there are $14$ such cases. The first two letters are 'switched' in the cases $\left(2,1,3,4\right)$ and $\left(2,1,4,3\right)$, and only in the case $\left(2,1,3,4\right)$ are there exactly $2$ wrongly addressed letters. If the first two letters are not switched then there are more than $2$ wrongly addressed letters, as argued above. We find a probability of $\frac{1}{14}$ that exactly $2$ letters are wrongly addressed, under the condition that the first two are wrongly addressed.
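The count is small enough to verify by enumerating all $24$ permutations in Python:

```python
from itertools import permutations

# Condition: the first two letters are wrongly addressed.
conditioned = [p for p in permutations(range(4)) if p[0] != 0 and p[1] != 1]
exactly_two_wrong = [p for p in conditioned
                     if sum(p[i] != i for i in range(4)) == 2]
print(len(exactly_two_wrong), len(conditioned))   # 1 14 -> probability 1/14
```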
How do you create an operation for circular indices of a vector?
Why do you want this to be a vector? Make a function $$f(x)=65 + \left((x - 65 )\bmod 26\right)$$ You can see, if $65 \leq a \leq 90$, you have $f(a)=a$, but for example $f(91)=65$, and so on.
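The same function in Python, checked on the boundary cases:

```python
def f(x):
    return 65 + (x - 65) % 26   # wrap ASCII codes so A..Z (65..90) cycle

print(f(65), f(90), f(91), f(64))   # 65 90 65 90
```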
Vortex Voronoi diagram?
Edit: It appears an identical idea has, with far greater detail, already been given to you by jvkersch. I am humbled. I should also point out that my example below, which was only meant as an illustration, would not be a steady-state solution in a physical fluid, because the interaction between the vortices themselves would cause them to move. David Bar Moshe's idea reminded me of some work in vector field visualization which does indeed use stagnation points to divide the fluid domain into something like "regions of influence". I believe the initial paper which introduced the idea was Helman and Hesselink's "Visualizing Vector Field Topology in Fluid Flows" (PDF copy). In our case, because we assumed that the flow is incompressible and irrotational, in a generic configuration the velocity can only be zero at saddle points, where the flow points inward along two directions and outward along two directions. Streamlines along these directions are called the separatrices of the saddle point. If you place two particles close together on different sides of a separatrix and let them following the flow, they will diverge at the saddle point and follow disparate long-term trajectories. So these separatrices divide space into regions where the global topology of the streamlines is different. Here's an example I cooked up in Matlab because I thought it would look pretty. There are four point vortices in a square, whose circulations are -1, -1, -2 and 1 going clockwise from top left. Here's what the direction of the velocity field looks like: In the diagram below, the separatrices divide space into regions around vortices (and clusters thereof). In each distinct region, the streamlines wind around a particular set of vortices. You can see the saddle points as the points where four arcs come together. I believe this sort of diagram is what you were hoping for when you asked for something like a Voronoi diagram around the vortices. (For a complete picture, there ought to be arrows on the separatrices to indicate the direction of flow, but I couldn't figure out how to do that in Matlab.)
If $X$ and $Y$ are not independent, is $E\left(X^2Y\right) = E\left(X^2E\left(Y|X\right)\right)$?
Assuming the expectations exist, $E[X^2 Y \mid X=x] = E[x^2 Y \mid X=x]= x^2 E[Y \mid X=x]$ so $E[X^2 Y \mid X] = X^2 E[Y \mid X]$ and $E\left[E[X^2 Y \mid X]\right] = E\left[X^2 E[Y \mid X]\right]$
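Since the identity is just the tower property at work, a Monte Carlo sanity check is easy to run; here is a sketch (entirely my own, with an arbitrary dependent pair chosen so that $E[Y \mid X] = X^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x = rng.normal(size=n)
y = x**2 + rng.normal(size=n)    # dependent pair with E[Y | X] = X^2

lhs = np.mean(x**2 * y)          # estimates E[X^2 Y]
rhs = np.mean(x**2 * x**2)       # estimates E[X^2 E[Y | X]] = E[X^4] = 3
print(lhs, rhs)                  # both close to 3
```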
Countable intersection of arbitrary dense subsets
First, in a Baire space the countable intersection of open dense subsets is even dense, not just non-empty (see wiki, e.g.). There is a notion of resolvable spaces, of which there are quite a lot. In such spaces we can write $X$ as a union of two disjoint dense subsets (like the irrationals and the rationals for the reals), and so there this property certainly fails. If a space has at least one isolated point $p$, then this $p$ will be in all dense subsets, so also in their intersection (if you only care about non-emptiness); and many spaces without isolated points are resolvable (all locally compact ones, and all metrisable ones too, I think). But I'm not aware of a lot of theory on this.
If $a,b$ are irrational numbers, is $K=[a,b] \cap \mathbb Q$ closed in $\mathbb Q$?
To say that a subset $K \subset \mathbb Q$ is closed in $\mathbb Q$ means that for every sequence $(x_i)$ in $K$, if $(x_i)$ converges to a limit $L$ which is in $\mathbb Q$ then $L$ is in $K$. Now, you have found a sequence $(x_i)$ in $K$ that converges to a limit $L$ which is in $\mathbb R$ and is not in $K$. However, this is not a contradiction to the statement that $K$ is closed in $\mathbb Q$. To find a contradiction, you would have needed to find a sequence $(x_i)$ in $K$ that converges to a limit $L$ which is in $\mathbb Q$ and is not in $K$, and you have not done that.
Looking for $H$ such that $H'DH=\sigma_1 I$ and $H'H=\sigma_2 I$, $D$ is a diagonal matrix.
I believe it is easier to first show when $H$ exists for the second requirement, and then show if/when $D$ exists for the first requirement given that $H$ exists. All this depends upon different cases for the value of ${\sigma _2}$ and the relative sizes of $m$ and $n$, so I parse the answer along those lines. If ${\sigma _2} = 0$, then necessarily $H = \mathbf{0}$ for any values of $m$ and $n$; then $D$ exists only if ${\sigma _1} = 0$, and in that case it can be any diagonal matrix. If ${\sigma _2} \ne 0$ and $m < n$, then no $H$ which satisfies the second requirement exists: the second requirement effectively requires $H$ to be partial orthonormal, and no partial orthonormal matrix exists when rows have been deleted (i.e., $m < n$). If ${\sigma _2} > 0$ and $m \ge n$, then $H$ exists; any orthonormal matrix from which the requisite number of columns has been deleted, and which has then been scaled by $\sqrt {{\sigma _2}} $, is a candidate. However, singular value decomposition then ensures that the only possible value of $D$ is $\left( {\frac{{{\sigma _1}}}{{{\sigma _2}}}} \right)I$; any diagonal matrix other than this particular choice cannot result in ${\sigma _1}I$. In general, $H$ will have no rows consisting of all zeros. In special cases, $H$ may have up to $m - n$ rows consisting of all zeros; for any such row, the associated value in $D$ may be chosen arbitrarily. If ${\sigma _2} < 0$ and $m \ge n$, then $H$ will exist but it will necessarily be complex. I believe only the same scaled $D$ then exists (but I haven't thought carefully about how all the different possible square roots of $ - I$ play through the first requirement). Note that in the first and third cases above, the arbitrary portions of $D$ need not be diagonal.
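Here is a small numerical sketch of the constructive case ($\sigma_2 > 0$, $m \ge n$); the QR-based construction of orthonormal columns is my own choice, not part of the answer above:

```python
import numpy as np

m, n = 5, 3
sigma1, sigma2 = 6.0, 2.0

# Orthonormal columns via QR of a random matrix, then scale by sqrt(sigma2).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(m, n)))   # Q is m x n with Q'Q = I
H = np.sqrt(sigma2) * Q

D = (sigma1 / sigma2) * np.eye(m)              # the only admissible diagonal D

assert np.allclose(H.T @ H, sigma2 * np.eye(n))        # second requirement
assert np.allclose(H.T @ D @ H, sigma1 * np.eye(n))    # first requirement
```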
Problem understanding the subspace topology of a given topology
$(\tau^Y)_Z\subset\tau^{Y \cap Z}$, but we cannot say $\tau^{Y \cap Z}\subset (\tau^Y)_Z$. Take any $Z\cap U\in (\tau^Y)_Z$; then $(Y\cap Z)\subset (Z\cap U)$, since $Y\subset U$. Hence $Z\cap U\in\tau^{Y \cap Z}$, so $(\tau^Y)_Z\subset\tau^{Y \cap Z}$. Now suppose $Z\subsetneqq X$; since $(Y\cap Z)\subset X$, we can take $X\in \tau^{Y \cap Z}$. But $A\subset Z$ for all $A\in (\tau^Y)_Z$, while $X\not\subset Z$, so $X\notin (\tau^Y)_Z$. Explicitly, $\tau^Y=\{A:Y\subset A\}\cup \{\emptyset\}$ gives $(\tau^Y)_Z=\{U\cap Z:U\in \tau^Y\}=\{A\cap Z:Y\subset A\}\cup \{\emptyset\}$. Finally, if $Y\cap Z=Z$, then $Z\subset Y$; so if $Y\subset A$, then $Z\subset Y\subset A$ and $Z\cap A=Z$. In that case $(\tau^Y)_Z=\{A\cap Z:Y\subset A\}\cup \{\emptyset\}=\{Z,\emptyset\}$.
If $[X,Y]=0$ then $X=0$
Suppose $[X,Y] = 0$ for all vector fields $Y$. Then, for all $f \in \mathcal{C}^{\infty}(M)$ and all $Y$, we have $0=[X,fY] = \left(X\cdot f\right) Y + f [X,Y] = \left(X \cdot f\right) Y$. Fix a point $p \in M$ and choose a vector field $Y$ that does not vanish in a neighbourhood of $p$. Then for all functions $f$, we have $(X\cdot f) Y= 0$. Thus in a neighbourhood of $p$, we have $X\cdot f = 0$. Note that this is true for all $p$. For now, we have shown that $X$ is a vector field satisfying $$ \forall f \in \mathcal{C}^{\infty}(M),~ X\cdot f = 0. $$ The exercise now reduces to showing that there is a unique vector field that acts trivially on the set $\mathcal{C}^{\infty}(M)$, namely the zero vector field. If you know the different equivalent definitions of vector fields, this should be pretty straightforward from there.
Fitting a hyperboloid to 3 different radii
After some further thought on the problem, I found my solution. I share it here for posterity. My initial attempts failed for a couple of reasons: one, I was trying too late at night and operating below nominal mental capacity, and two, I failed to take into account the fact that a solution involving a square may have two solutions ($\pm$). We can derive a solution as follows, using the names shown above, with one change. I realized I'd used $b$ twice, once as a constant in the $r(y)$ function and the other as a result of the $r(y)$ function. Because the constant in the $r(y)$ function is easily determined, and to avoid confusion, I'm going to rewrite the function $r(y)$ as $$r(y)=\sqrt{my^2+w^2}$$ and drop any reference to that value. We really need to resolve $m$, but $m$ is going to be defined in terms of three points on the hyperbola: $(w, 0)$, $(b, y_b)$, and $(t, y_t)$. We don't know the values of $y_b$ or $y_t$. We can get them in terms of $m$, or we can get $m$ in terms of either of them, but that's not enough information to solve for any of these three values on their own. However, we can bridge that gap using $h$. First we solve for $y_b$ by plugging the point into the function: $$b=\sqrt{my_b^2+w^2}.$$ Solving for $y_b$ yields: $$y_b=-\sqrt{\frac{b^2-w^2}{m}}.$$ We know we want a negative $y_b$, so we ignore the positive root. Similarly, solving for $y_t$ gets us: $$y_t=\sqrt{\frac{t^2-w^2}{m}}.$$ Again, we know we want a positive $y_t$, so we only keep the positive root here. We know these are related by $h$ like this: $$h=y_t-y_b.$$ If we plug our solutions in we get: $$h=\sqrt{\frac{t^2-w^2}{m}}+\sqrt{\frac{b^2-w^2}{m}}.$$ We can then solve for $m$: $$m=\frac{\left(\sqrt{t^2-w^2}+\sqrt{b^2-w^2}\right)^2}{h^2}.$$ With that, we have a final solution of: $$r(y)=\sqrt{\frac{\left(\sqrt{t^2-w^2}+\sqrt{b^2-w^2}\right)^2}{h^2}y^2+w^2}.$$
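A direct transcription into Python (my own sketch; the parameter names follow the derivation: $w$ is the waist radius, $b$ and $t$ the bottom and top radii, $h$ the height):

```python
import math

def make_r(w: float, b: float, t: float, h: float):
    """r(y) for the hyperboloid with waist radius w, bottom radius b at
    depth y_b < 0, top radius t at height y_t > 0, and h = y_t - y_b."""
    m = (math.sqrt(t**2 - w**2) + math.sqrt(b**2 - w**2))**2 / h**2
    return m, (lambda y: math.sqrt(m * y**2 + w**2))

# Example: waist 1, bottom radius 2, top radius 3, height 5.
m, r = make_r(w=1.0, b=2.0, t=3.0, h=5.0)
y_b = -math.sqrt((2.0**2 - 1.0**2) / m)
y_t = math.sqrt((3.0**2 - 1.0**2) / m)
print(r(0.0), r(y_b), r(y_t))        # 1.0  2.0  3.0
print(math.isclose(y_t - y_b, 5.0))  # True: the height comes out right
```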
Is it possible to find $n-1$ consecutive composite integers
Try the numbers: $$ k!+2,\,k!+3,\ldots,k!+k. $$ Each $k!+i$ with $2\le i\le k$ is divisible by $i$ (and strictly larger than $i$), so you have $k-1$ consecutive composite integers.
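A quick check in Python (my own illustration) for, say, $k=6$:

```python
from math import factorial

def is_composite(n: int) -> bool:
    return n > 1 and any(n % d == 0 for d in range(2, int(n**0.5) + 1))

k = 6
run = [factorial(k) + i for i in range(2, k + 1)]  # 722, 723, 724, 725, 726
assert all(is_composite(n) for n in run)           # 5 consecutive composites
for i, n in zip(range(2, k + 1), run):
    assert n % i == 0                              # k! + i is divisible by i
```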
The density of the fractional part of $x^n$ in $(0,1)$?
The most reasonable thing I could easily show is: The set of such $x$ is dense in $\Bbb R\setminus[-1,1]$, and the complement is also dense (at least in $\Bbb R\setminus[-2,2]$)! I'll elaborate on $x>1$ only. If $x,y\in \Bbb R$, write $x\equiv y$ as shorthand for $x-y\in\Bbb Z$ (i.e., $x$ and $y$ have the same fractional part). If $x\in\Bbb R$ and $A\subseteq \Bbb R$, let us write $x\dot\in A$ as an abbreviation for: there exists $y\in A$ with $x\equiv y$. Lemma. Let $[a_1,b_1],[a_2,b_2],\ldots$ be a sequence of closed intervals $\subset \Bbb R$. Let $q>1$ and $d>0$. Let $0<n_1<n_2<\ldots$ be a strictly increasing sequence of positive integers such that $q^{n_{k+1}-n_k}(b_k-a_k)\ge 2$ for $k\ge 1$ and $q^{n_1-1}d\ge 2$. Then for every interval $[a,b]$ with $q\le a$ and $b-a\ge d$, there exists $x\in[a,b]$ with $x^{n_k}\dot\in[a_k,b_k]$ for all $k$. Proof. Note that $x\dot\in[u,v]$ is trivially true if $v\ge u+1$. Hence we may assume wlog that $b_k<a_k+1$ for all $k$. Let $I_0=[a,b]$. Then $b^{n_1}-a^{n_1}>a^{n_1-1}(b-a)\ge q^{n_1-1}d\ge 2$. Hence there exists $m\in\Bbb Z$ with $a^{n_1}< a_1+m<b_1+m< b^{n_1}$. Let $I_1=[\sqrt[n_1]{a_1+m},\sqrt[n_1]{b_1+m}]$. Then $I_1\subsetneq I_0$, $x^{n_1}\dot\in[a_1,b_1]$ for all $x\in I_1$, and for each $y\in [a_1,b_1]$ there exists $x\in I_1$ with $x^{n_1}\equiv y$. Assume we have found $I_0\supsetneq I_1\supsetneq \ldots \supsetneq I_k$ with the following properties: (i) $x^{n_j}\dot\in [a_j,b_j]$ for all $1\le j\le k$ and all $x\in I_k$; (ii) for each $y\in[a_k,b_k]$ there exists $x\in I_k$ with $x^{n_k}\equiv y$. Let $I_k=[u,v]$. Then $v>u\ge q$ and so $$v^{n_{k+1}}-u^{n_{k+1}}>u^{n_{k+1}-n_k}(v^{n_k}-u^{n_k})\ge q^{n_{k+1}-n_k}(b_k-a_k)\ge 2.$$ Again, we find $m\in\Bbb Z$ with $u^{n_{k+1}}<a_{k+1}+m<b_{k+1}+m<v^{n_{k+1}}$. By letting $I_{k+1}=[\sqrt[n_{k+1}]{a_{k+1}+m},\sqrt[n_{k+1}]{b_{k+1}+m}]$ we extend our nested sequence and still have the properties above. Then $\bigcap_{k\in\Bbb N}I_k$ is non-empty, and for each $x\in\bigcap_{k\in\Bbb N}I_k$ we have $x^{n_k}\dot\in[a_k,b_k]$ for all $k$, as desired. $\square$ Using the lemma and e.g. $n_k=k^2$ we can easily prescribe the intervals $[a_k,b_k]$ in such a way that the fractional part of $x^{k^2}$ follows a sequence that is dense in $[0,1]$. At least for $a>2$, we can use the lemma and $n_k=N+k$ for $N$ big enough to find $x$ such that $x^n\dot\in[0,\frac2a]$ for almost all $n$.
Under what circumstances can $ALA^{-1}+BLB^{-1}=2 XLX^{-1}$ be solved for the linear operator $X$ only depending on $A$ and $B$ but not on $L$?
For any fixed $k \neq 2$, there does not exist any $X \in GL(n,\mathbb{C})$ such that $$ A L A^{-1} + B L B^{-1} = k X L X^{-1}$$ for all $L \in M_n(\mathbb{C})$, for if there were, then taking $L=1_n$ yields $$ 0 = A 1_n A^{-1} + B 1_n B^{-1} - k X 1_n X^{-1} = (2-k) 1_n,$$ which is impossible. In particular, only the factor $k = 2$ appearing in your equation can possibly work. EDIT: In the case that $k = 2$, since every algebra automorphism of $M_n(\mathbb{C})$ is inner, and since $ALA^{-1} + BLB^{-1} = A(L + CLC^{-1})A^{-1}$ for $C = A^{-1}B$, your problem is equivalent to finding all $C \in GL(n,\mathbb{C})$ such that $L \mapsto \tfrac{1}{2}(L + CLC^{-1})$ is an algebra automorphism of $M_n(\mathbb{C})$. In the case that $C^2=1_n$, you can check that this is true if and only if $C$ is a non-zero scalar multiple of the identity, in which case $\tfrac{1}{2}(L+CLC^{-1}) = L$, so that $B$ must be a non-zero scalar multiple of $A$, and you can just take $X=A$. Anything more general looks like a bit of a mess, though perhaps you can compute something explicit for $n=2$?
Are all $*$-algebras with matrix representations projective?
This is trivial from the universal property of the $C^*$-enveloping algebra: $M_d(\mathbb{C})$ is a $C^*$-algebra, so any homomorphism to it factors through the $C^*$-enveloping algebra. This doesn't have anything to do with being projective, though. For a simple example, let $\mathscr{A}=M_2(\mathbb{C})\times M_3(\mathbb{C})$. This has matrix representations, but it is not projective: there is an epimorphism $\mathscr{A}\to M_2(\mathbb{C})$ (the first projection) which does not split since there are no homomorphisms $M_2(\mathbb{C})\to M_3(\mathbb{C})$.
How to distinguish a connected set from a disconnected set?
First note that $A$ and $B$ are both arcwise connected; in fact, $B$ is an arc. Next, note that the only point of $B$ that really matters is the origin: $A\cup B$ is connected if and only if $A\cup\{\langle 0,0\rangle\}$ is connected, and similarly for arcwise connectedness. Now use the answers to this question and this question.
Intersection of a circle $x^2+(y-1)^2=1$ and a parabola $ax^2=y$, bounds for a
We can compute very easily the intersections of the two conics: $$\frac{y}{a}+(y-1)^2=1 \leftrightarrow ay^2+y(1-2a)=0 \leftrightarrow y(ay+1-2a)=0$$ Now, one solution is $y=0$, which holds for every $a$. The other solution is $y=\frac{2a-1}{a}$ and so $x=\pm\frac{1}{a}\sqrt{2a-1}$. Now, taking it from here, it's very simple. In fact we want the square root to be defined, so: $$2a-1>0 \leftrightarrow a>\frac{1}{2}$$ Note that your error was committed when stating the range of $a$ without considering the fact that $x=\pm\sqrt{\frac{y}{a}}$ must be real. In fact, as you stated, if $a\in\mathbb{R}\setminus\left\{0,\frac{1}{2}\right\}$, then for example $a=-1$ would be OK, but if we plug it into the equation for $x$ we have a contradiction. In fact: $$y(-y+3)=0 \leftrightarrow y=0\vee y=3$$ But now: $$x=\pm\sqrt{\frac{y}{a}}=\pm\sqrt{\frac{3}{-1}}=\text{IMPOSSIBLE}$$
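A quick numerical sanity check (my own, with $a=1$) confirms the intersection formula:

```python
import math

a = 1.0
y = (2*a - 1) / a                  # non-trivial intersection height: y = 1
x = math.sqrt(2*a - 1) / a         # x = ±1; take the positive branch

assert math.isclose(x**2 + (y - 1)**2, 1.0)   # the point lies on the circle
assert math.isclose(a * x**2, y)              # and on the parabola
```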
$A$ is Hermitian, $B$ is a leading principal submatrix of $A$, $\operatorname{rank} B = \operatorname{rank} A$. Why is $A$ positive semidefinite?
The hermitian form $a$ associated with $A$ has a diagonalizing basis $\mathbf{b}$, so the matrix which represents the form can be taken to be diagonal. The rank is $n-1$, so we can assume (reordering the basis vectors $\mathbf{b}_{i}$ if needed) that the last row of the matrix is the zero vector $\mathbf{0}$ (one row must consist of zeros because there are $n-1$ independent rows in the matrix by hypothesis, and the matrix is diagonal, so one row must be a linear combination of the remaining $n-1$). $$A=\begin{bmatrix} b_{11} & 0 & \cdots & 0 & 0 \\0 & b_{22} & \cdots & 0 & 0\\ \vdots & \vdots & \vdots &\vdots & \vdots \\ 0 & 0 & \cdots & b_{n-1\, n-1} &0 \\ 0 & 0 & \cdots & 0 & 0 \end{bmatrix}$$ The matrix $B$ corresponding to the form $b$ is the diagonal submatrix of $A$ with the $b_{ii}$ as diagonal elements. The hermitian form $a(\mathbf{v},\mathbf{w})$ can be expressed as $$\sum_{i,j} x_i \bar y_j\,a(\mathbf{b}_i,\mathbf{b}_j) = \ ^\text{t}\mathbf{x}A \mathbf {\bar y}$$ ($\mathbf{x}$ and $\mathbf{y}$ are respectively the coordinate vectors of $\mathbf{v}$ and $\mathbf{w}$). So we have to check that $\sum_{i,j} x_i \bar x_j\,a(\mathbf{b}_i,\mathbf{b}_j) = \ ^\text{t}\mathbf{x}A \mathbf {\bar x}\ge 0$. This is an immediate consequence of the hypothesis that $B$ is positive semidefinite. (To fix ideas I assumed rank $= n-1$, but the matter obviously remains substantially unchanged by assuming any rank; some further $b_{ii}$ will then be $0$.)
Induced maps in homology are injective
No, this does not need to be the case. Let $X$ be any space whose fundamental group is perfect (i.e. such that its abelianization is zero), for example the classifying space $X = BA_5$ of the alternating group $A_5$. Let $H \subset A_5$ be the subgroup generated by an element of order two (note that a transposition is odd, hence not in $A_5$), for example, $H = \{ 1, \sigma \}$ with $\sigma = (1 \; 2)(3 \; 4)$. The group $H$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}$, whose abelianization is itself. Then by the classification of covering spaces, there exists a covering space $p : \tilde{X} \to X$ such that $\pi_1(\tilde{X}) = H$ and $p$ induces the inclusion $H \subset A_5$ on fundamental groups. Now on the level of homology, $H_1(\tilde{X}) = H_{\mathrm{ab}} = H$, whereas $H_1(X) = (A_5)_{\mathrm{ab}} = 0$. Thus the induced map $H_1(\tilde{X}) \to H_1(X)$ cannot be a monomorphism. Here is another example that doesn't use classifying spaces. Let $X = S^1 \vee S^1$ be the figure-eight space, the wedge sum of two circles. Its fundamental group is the free group on two generators, $$\pi_1(X) = \mathbb{Z} * \mathbb{Z} = \langle a,b \rangle.$$ Consider the subgroup $H$ generated by $g = aba^{-1}b^{-1}$. Since $g$ has infinite order, you get $H \cong \mathbb{Z}$. Using the classification of covering spaces, there exists a covering space $p : \tilde{X} \to X$ such that $\pi_1(\tilde{X}) = H = \mathbb{Z}$ and $p$ induces the inclusion $H \subset \pi_1(X)$ on fundamental groups. Now, $H$ is abelian, so $H_1(\tilde{X}) = H = \mathbb{Z}$, with generator $[g]$. On the other hand, $H_1(X) = \mathbb{Z} \oplus \mathbb{Z}$ is the direct sum of two copies of $\mathbb{Z}$, with generators $[a]$ and $[b]$. The morphism that $p$ induces on $H_1$ satisfies: $$p_*([g]) = [a] + [b] - [a] - [b] = 0$$ and so is the zero morphism on homology, which is not injective.
Stokes' Theorem Different Representations? $\iint_S (\nabla \times \mathbf{F}) \cdot \hat{n} \ dS$, $\iint_S ( \nabla \times \mathbf{F} ) \cdot \ dS$?
Physicists like to use the notation $\operatorname d \vec S$, i.e. they assign a direction to the area element, so it becomes a vector and the scalar product is well-defined. This direction is simply the unit normal; hence $\operatorname d \vec S= \hat n \operatorname d S$. I guess some authors simply omit the arrows, as is usual in the mathematics literature. In summary, these are equivalent notations.
Closed Form for the integral $\int_0^1 \left(\frac{e^{2 x-2}x}{1-x^2}-\frac{e^{3 x-3} x}{1-x^2}\right)\,dx$
I see no interesting property of the integral. You may begin by noticing that $$ \int \frac{e^{ax}}{x+b} \, dx = e^{-ab} \operatorname{Ei}(a(x+b)) + C, $$ where $\operatorname{Ei}$ is the exponential integral. With the aid of partial fraction decomposition, you should be able to compute $$ \int_{0}^{1} \frac{(e^{2x-2} - e^{3x-3}) x}{1-x^2} \, dx = \lim_{r \to 1^-} \frac{1}{2} \int_{0}^{r} (e^{2x-2} - e^{3x-3})\left( \frac{1}{1-x} - \frac{1}{1+x} \right) \, dx. $$
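As a numerical cross-check of the antiderivative (my own sketch, using SciPy's exponential integral $\operatorname{Ei}$, exposed as `scipy.special.expi`):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi  # the exponential integral Ei

a, b = 2.0, 1.0   # sample parameters for the antiderivative formula

# Definite integral of e^{ax}/(x+b) over [0, 1], computed two ways.
numeric, _ = quad(lambda x: np.exp(a * x) / (x + b), 0.0, 1.0)
closed = np.exp(-a * b) * (expi(a * (1.0 + b)) - expi(a * (0.0 + b)))
assert np.isclose(numeric, closed)
```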
Clarification of a proof as written in Margaris's book
Answers In the second step "SC,1" means that the following, $$(\forall v{\sim}P\to{\sim}P(t/v))\to (P(t/v)\to{\sim}\forall v{\sim}P)$$ is a tautology. Hint: If ${\sim}P$ doesn't admit $t$ for $v$, then there exists at least one variable $u$ in $t$ such that it occurs in a subformula of the form $\mathtt{Q}uM$ of ${\sim}P$ where $\mathtt{Q}$ is a quantifier and $v$ is free in $M$ (why?).
induced bi-regular subgraph of a bi-regular bipartite graph
It is easy to deal with this problem in terms of covers of $V_1$. Namely, for each vertex $v_2\in V_2$ the set $N(v_2)$ of its neighbors is a subset of $V_1$ of size $n$. Each vertex $v_1\in V_1$ is covered by the family $\mathcal U=\{N(v_2):v_2\in V_2\}$ exactly $m$ times. Given $k<m$ such that $kn$ is divisible by $r=|V_1|$, we want to find a subfamily $\mathcal V$ of a given $n$-uniform cover $\mathcal U$ which covers each vertex of $V_1$ exactly $k$ times. This is not always possible. For instance, recently, solving another problem, Andriy Yurchyshyn constructed for $r\in\{6,10,14,18\}$ a family $\mathcal U$ of subsets of the set $[r]=\{1,\dots,r\}$, each of size $n=r/2$, such that each element $v_1\in [r]$ is covered exactly $m=\tfrac 14 {r \choose r/2}$ times and $\mathcal U$ contains no subfamily $\mathcal V$ which covers each element of $[r]$ exactly $k=1$ times. The family $\mathcal U$ for $r=6$ is 124 235 346 451 562 613 123 345 561 246 The following subfamilies of $\mathcal U$ also produce counterexamples: 124 235 346 451 562 613 and 123 345 561 246
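For the $r=6$ family, both claims can be verified by brute force; here is a small Python check (my own). Note that with sets of size $3$, a subfamily covering each of the $6$ elements exactly once would have to consist of exactly two disjoint sets partitioning $\{1,\dots,6\}$.

```python
from itertools import combinations

# The ten 3-element subsets of {1,...,6} listed above.
family = [{1,2,4}, {2,3,5}, {3,4,6}, {4,5,1}, {5,6,2},
          {6,1,3}, {1,2,3}, {3,4,5}, {5,6,1}, {2,4,6}]

# Each element is covered exactly m = C(6,3)/4 = 5 times.
assert all(sum(v in s for s in family) == 5 for v in range(1, 7))

# No exact cover: no two disjoint members partition {1,...,6}.
exact_covers = [pair for pair in combinations(family, 2)
                if pair[0].isdisjoint(pair[1])
                and pair[0] | pair[1] == set(range(1, 7))]
assert exact_covers == []
```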