Is this a concurrency?
According to my sketch in GeoGebra, if $X$ is the intersection of $MN$ with $BG$, then the angle between $CX$ and $FG$ is $\approx 89.45844358274003^\circ$, not $90^\circ$. Hence there is no concurrency.
Discrete math problem confusion.
In order to construct a function $f:A\to B$ with $|f[A]|=4$, you must first choose $4$ elements of $B$ to be the range of $f$; this can be done in $\binom74$ ways. Once you’ve chosen a set $S$ of four target elements, there are $4^{10}$ functions from $A$ to $S$. Unfortunately, that figure includes a lot of functions that you don’t want, because they map $A$ to a proper subset of $S$. You need to subtract those functions that map to at most $3$ elements of $S$. There are $3^{10}$ functions from $A$ to any $3$-element subset of $S$, and $S$ has $\binom43=4$ $3$-element subsets, so you have $4\cdot3^{10}$ unwanted functions included in the original figure of $4^{10}$. Thus, a second approximation to the desired result is $$4^{10}-4\cdot3^{10}\;.\tag{1}$$ Unfortunately, if $S=\{s_1,s_2,s_3,s_4\}$, say, a function from $A$ to $\{s_1,s_2\}$ will be counted once in the term $4^{10}$ and twice in the term $4\cdot3^{10}$, once as a function from $A$ to $\{s_1,s_2,s_3\}$ and once as a function from $A$ to $\{s_1,s_2,s_4\}$. Thus, $(1)$ counts such a function $-1$ times instead of the correct $0$ times. You’ll have to add such functions back in. I’ll let you try to finish the job; what you’re using here is an inclusion-exclusion argument.
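If you want to sanity-check the finished inclusion-exclusion count, here is a minimal Python sketch (the helper name `count_exact_image` is mine); it brute-forces a smaller analog before evaluating the actual case:

```python
from itertools import product
from math import comb

def count_exact_image(n, b, k):
    """Functions from an n-set to a b-set whose image has exactly k elements:
    choose the image, then count surjections onto it by inclusion-exclusion."""
    surj = sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1))
    return comb(b, k) * surj

# Brute-force check on a small analog (|A| = 5, |B| = 4, image size 3).
brute = sum(1 for f in product(range(4), repeat=5) if len(set(f)) == 3)
assert brute == count_exact_image(5, 4, 3)

print(count_exact_image(10, 7, 4))  # the count asked about in the question
```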
Doubt about a Hölder's inequality exercise
I've inserted the extra step: $$\int_X (\lvert f\rvert^{p_2})^{1/\alpha}\mathbf{1}_X\,\mathrm{d}\mu\leq \left(\int_X \left((\lvert f\rvert^{p_2})^{1/\alpha}\right)^{\alpha}\,\mathrm{d}\mu\right)^{1/\alpha}\left(\int_X \lvert \mathbf{1}_X\rvert^{\beta}\,\mathrm{d}\mu\right)^{1/\beta} = \left(\int_X \lvert f\rvert^{p_2}\,\mathrm{d}\mu\right)^{1/\alpha}\left(\int_X \lvert \mathbf{1}_X\rvert^{\beta}\,\mathrm{d}\mu\right)^{1/\beta}$$ Essentially, the $1/\alpha$ is cancelling out with the $\alpha$ in the first integrand.
Show that $e^{\frac{\pi}{2}i }= i$ - Problem to understand a certain convergence
You know that $ \sin(x) \in \mathbb{R}$, so that $$\left\vert \sin \left( \frac{\pi}{2} \right) \right\vert=1 \Rightarrow \sin \left( \frac{\pi}{2} \right)= \pm1$$ Since $\sin'(x)=\cos(x) >0$ for all $x \in [0,\frac{\pi}{2})$, the sine function is strictly increasing on $[0,\frac{\pi}{2})$. With $\sin(0)=0$ and $\sin$ being a continuous function, the only possibility is $$\sin \left( \frac{\pi}{2} \right)=+1$$ Hence $$e^{i\frac{\pi}{2}} = i$$ Edit If you want to prove that $\sin'(x)=\cos(x)$, note that $$\frac{d}{dx}\left( e^{ix} \right) = ie^{ix} = i\cos(x) - \sin(x)$$ implies $$ \cos'(x) + i\sin'(x) = -\sin(x) + i\cos(x)$$ proving that $\sin'(x)=\cos(x)$.
Power series differentiable at endpoints?
You cannot say anything about the convergence at $\pm R$. For example, take $R=1$ and consider $ \sum \frac { {x^{n}+(-x)^{n}}} n$. To say anything about continuity, differentiability, etc. at $\pm R$, you have to assume that the radius of convergence exceeds $R$.
Collection of measurable sets closed under countable unions implies existence of a set with maximal measure
Let $s$ be the supremum of the $\mu$-measures of members of $\mathcal G$. By definition of supremum, for each $n$ there is $G_n\in\mathcal G$ with $\mu(G_n)>s-1/n$. Letting $G=\bigcup_n G_n$, we have $G\in \mathcal G$ since $\mathcal G$ is closed under countable unions, and $\mu(G)=s$: it is at least $\sup_n\mu(G_n)\geq s$ (because $\mu(G_n)>s-1/n$ for every $n$), and it is at most $s$ by definition of $s$.
Primes with $p^9\pm1 = q^4r$
Factoring the L.H.S. we can see that there is no solution. For example, $p^9-1=(p^3-1)(p^6+p^3+1)$ and the two factors on the R.H.S. have gcd $1$ or $3$. The case when the gcd is $3$ forces $r=3$ or $q=3$ which lead to no solution. So the gcd is $1$, which means that if $q=2$ then the smaller factor is $1$ or $16$ which is not possible. Similarly all the other cases lead to no solution.
Making Monic polynomial problem
Hint: Let $x=2^{-1/3}+2^{-2/3},$ then $x^3=\frac12+3\cdot\frac12\cdot x+\frac14$.
Map $n(g,h) = gh^{-1}$ is smooth implies $G$ is a Lie Group.
For completeness: my attempt is correct. (see the comments)
If $|f'(z)| \leq |z|$ for all $z\in \mathbb{C}$ then $f(z) = a + bz^2$ for arbitrary $a,b\in\mathbb{C}$ with $|b| \leq 1$.
You know that your function is entire because $$ \left|\frac{f'(z)}{z}\right| \leq 1 $$ is bounded. Since $f'(z)/z$ is bounded and holomorphic near $0$ (away from $0$ itself), you can extend it to $z=0$, and therefore you have an entire function; $z=0$ is a "removable singularity", and you can find characterisations of it in any textbook or script. As for your other question: it is trivially true that $|b|\leq \frac{1}{2}\leq 1$. You can just plug in the derivative and check that $|bz| \leq |z|$ implies $|b| \leq 1$. In your proof, you had a statement on the derivative, i.e. $f'(z)=cz$, and then you integrated both sides. In the process, you got a factor of $\frac{1}{2}$ which, however, vanishes when you differentiate with respect to $z$. The problem here is that you took a statement/restriction on the derivative, and by integrating, the statement changed. Look at it this way, just ignoring the $|\cdot|$ for a second: $$ f'(z) \leq cz $$ becomes $$ \int f'(z)\,dz=f(z)\leq \frac{1}{2}cz^2=\int cz\,dz $$
Find the function $f(x)$
Given $$\displaystyle f(x+y) = e^{x}f(y)+e^{y}f(x)\Rightarrow \frac{f(x+y)}{e^{x+y}} = \frac{f(y)}{e^{y}}+\frac{f(x)}{e^x}$$ Now put $\displaystyle \frac{f(x)}{e^x} = g(x)$; the functional equation then becomes $\displaystyle g(x+y) = g(x)+g(y)$. This is Cauchy's functional equation, whose (continuous) solution is $g(x) = cx$. So we get $$\displaystyle \frac{f(x)}{e^x} = cx\Rightarrow f(x) = cxe^x$$ and hence $f'(x) = c\left[xe^x+e^x\right]$. Now we are given $f'(0) = 1$, so putting $x=0$ in the above equation gives $1=c$, and therefore $$f(x) = xe^x$$
Closed form for zeros.
I am a bit skeptical about a closed form for the root. There is a simple formula for $t$ $$t(x)=\frac{2 \left(e^{\frac{1}{x^2+1}}-1\right) x}{3-2 e^{\frac{1}{x^2+1}}}$$ but its inversion does not seem possible even using special functions. If we consider the problem for small values of $t$ (that is to say, for large values of $x$), by Taylor we have $$t=\frac{2}{x}+\frac{3}{x^3}+\frac{13}{3 x^5}+\frac{77}{12 x^7}+\frac{187}{20 x^9}+O\left(\frac{1}{x^{11}}\right)$$ and using series reversion $$x=\frac{2}{t}+\frac{3 t}{4}-\frac{7 t^3}{24}+\frac{11 t^5}{48}-\frac{163 t^7}{720}+O\left(t^9\right)$$ Trying for $t=\frac 1 {1000}$, this truncated series gives $$x=\frac{1440000539999790000164999837}{720000000000000000000000}$$ which is $$x\approx 2000.00074999970833356249977361111$$ while the "exact" solution is $$x=2000.00074999970833356249977361136$$ For $t=\frac 12$, the approximation gives $$x=\frac{400337}{92160}\approx 4.34393$$ while the exact solution is $x=4.344314$. Edit There is another way to get more and more accurate closed-form approximations when $x$ is large. Let $x=\frac 1y$ and consider the equation $f(y)=0$. Now build the $[1,m]$ Padé approximant of $f$ around $y=0$. It writes $$f(y) \sim \frac {a^{(m)}y -t } {1+\sum_{i=1}^m b_i^{(m)} y^i}\implies y=\frac t {a^{(m)}} \implies x=\frac {a^{(m)}} t$$ All coefficients $a^{(m)}$ and $b_i^{(m)}$ are determined by the values of the function and its derivatives at $y=0$. For the fun of it, I give you the one for $m=9$. It reads $$x_{(9)}=\frac{2 \left(8521 t^8+94080 t^6+269280 t^4+276480 t^2+92160\right)}{3t \left(561 t^8+15600 t^6+64000 t^4+80640 t^2+30720 \right)}$$ For $t=\frac 1 {1000}$ it gives a relative error of $7.73 \times 10^{-33}$%.
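As a numerical cross-check, one can invert $t(x)$ by simple bisection and compare with the truncated reversed series; a Python sketch (the bracket $[1.3,10^6]$ is my choice, assuming $t$ is decreasing there):

```python
from math import exp

def t_of_x(x):
    e = exp(1.0 / (x * x + 1.0))
    return 2.0 * (e - 1.0) * x / (3.0 - 2.0 * e)

def invert(t_target, lo=1.3, hi=1e6, iters=200):
    # t_of_x decreases towards 0 on this bracket, so bisect on it
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if t_of_x(mid) > t_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t = 0.5
series = 2/t + 3*t/4 - 7*t**3/24 + 11*t**5/48 - 163*t**7/720
print(invert(t), series)  # ~4.344314 vs ~4.343935, as in the answer
```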
Proof using the axioms
Clearly $(-a)+a=a+(-a)=0$ since $-a$ is the additive inverse of $a$. Since additive inverses are unique, $a$ must also be the additive inverse of $(-a)$, which is (by the definition of the notation) $-(-a)$
Universal elimination when $\forall$ is not the main operator
You cannot apply UE when the quantifier is not the main operator. You have to derive $\forall x A(x)$ in order to use it to "detach" $B$ using ($\to$-E):

1) $\forall x A(x) \to B$ --- premise
2) $\lnot \exists x (A(x) \to B)$ --- assumed [a]
3) $\lnot A(y)$ --- assumed [b]
4) $A(y)$ --- assumed [c]
5) $\bot$ --- from 3) and 4)
6) $B$ --- from 5)
7) $A(y) \to B$ --- from 4) and 6) by ($\to$-I), discharging [c]
8) $\exists x (A(x) \to B)$ --- from 7) by ($\exists$-I)
9) $\bot$ --- from 2) and 8)
10) $\lnot \lnot A(y)$ --- from 3) and 9) by ($\to$-I), discharging [b]
11) $A(y)$ --- from 10) by ($\lnot \lnot$-E)
12) $\forall x A(x)$ --- from 11) by ($\forall$-I)
13) $B$ --- from 1) and 12) by ($\to$-E)
14) $A(y) \to B$ --- from 13) by ($\to$-I)
15) $\exists x (A(x) \to B)$ --- from 14) by ($\exists$-I)
16) $\bot$ --- from 2) and 15)
17) $\exists x (A(x) \to B)$ --- from 2) and 16) by ($\to$-I), discharging [a], followed by ($\lnot \lnot$-E)
Cascading Summation (2) $\sum_{i_1\le i_2\le i_3\le \cdots \le i_m}^n \left[\prod_{r=1}^m (i_r+2r-2)\right]/(2m-1)!!$
$$\begin{align} &\sum_{i_1\le i_2\le i_3\le \cdots \le i_m}^n \left[\prod_{r=1}^m (i_r+2r-2)\right]\bigg /(2m-1)!!\\ &=\sum_{i_1=1}^n\sum_{i_2=i_1}^n\sum_{i_3=i_2}^n\cdots\sum_{i_m=i_{m-1}}^n\frac {i_1(i_2+2)(i_3+4)(i_4+6)\cdots(i_m+2m-2)}{1\cdot 3\cdot 5\cdot 7\cdots (2m-1)}\\ &=\sum_{i_m=1}^n\sum_{i_{m-1}=1}^{i_m}\cdots\sum_{i_3=1}^{i_4}\sum_{i_2=1}^{i_3}\sum_{i_1=1}^{i_2}\frac {i_1(i_2+2)(i_3+4)(i_4+6)\cdots(i_m+2m-2)}{1\cdot 3\cdot 5\cdot 7\cdots (2m-1)}\\ &=\sum_{i_m=1}^n\frac{i_m+2m-2}{2m-1}\sum_{i_{m-1}=1}^{i_m}\frac{i_{m-1}+2m-4}{2m-3}\cdots\sum_{i_3=1}^{i_4}\frac{i_3+4}5\sum_{i_2=1}^{i_3}\frac{i_2+2}3\sum_{i_1=1}^{i_2}\frac{i_1}1\\ &=\sum_{i_m=1}^n\frac{i_m+2m-2}{2m-1}\sum_{i_{m-1}=1}^{i_m}\frac{i_{m-1}+2m-4}{2m-3}\cdots\sum_{i_3=1}^{i_4}\frac{i_3+4}5\sum_{i_2=1}^{i_3}\frac{i_2+2}3\binom{i_2+1}2\\ &=\sum_{i_m=1}^n\frac{i_m+2m-2}{2m-1}\sum_{i_{m-1}=1}^{i_m}\frac{i_{m-1}+2m-4}{2m-3}\cdots\sum_{i_3=1}^{i_4}\frac{i_3+4}5\sum_{i_2=1}^{i_3}\binom{i_2+2}3\\ &=\sum_{i_m=1}^n\frac{i_m+2m-2}{2m-1}\sum_{i_{m-1}=1}^{i_m}\frac{i_{m-1}+2m-4}{2m-3}\cdots\sum_{i_3=1}^{i_4}\frac{i_3+4}5\binom{i_3+3}4\\ &=\sum_{i_m=1}^n\frac{i_m+2m-2}{2m-1}\sum_{i_{m-1}=1}^{i_m}\frac{i_{m-1}+2m-4}{2m-3}\cdots\sum_{i_3=1}^{i_4}\binom{i_3+4}5\\ &=\vdots\\ &=\sum_{i_m=1}^n\frac{i_m+2m-2}{2m-1}\sum_{i_{m-1}=1}^{i_m}\frac{i_{m-1}+2m-4}{2m-3}\binom{i_{m-1}+2m-5}{2m-4}\\ &=\sum_{i_m=1}^n\frac{i_m+2m-2}{2m-1}\sum_{i_{m-1}=1}^{i_m}\binom{i_{m-1}+2m-4}{2m-3}\\ &=\sum_{i_m=1}^n\frac{i_m+2m-2}{2m-1}\binom{i_{m}+2m-3}{2m-2}\\ &=\sum_{i_m=1}^n\binom{i_{m}+2m-2}{2m-1}\\ &=\binom{n+2m-1}{2m}\qquad\blacksquare\\ \end{align}$$ NB: for the case where $m=3$, as in the question here, this becomes $$\sum_{i_1=1}^n\sum_{i_2=i_1}^n\sum_{i_3=i_2}^n\frac {i_1(i_2+2)(i_3+4)}{1\cdot 3\cdot 5}=\binom {n+5}6$$
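A brute-force check of the final identity for the $m=3$ case (Python, small $n$ only):

```python
from math import comb

def lhs_times_15(n):
    # 15 = (2m-1)!! for m = 3; sum over 1 <= i1 <= i2 <= i3 <= n
    return sum(i1 * (i2 + 2) * (i3 + 4)
               for i1 in range(1, n + 1)
               for i2 in range(i1, n + 1)
               for i3 in range(i2, n + 1))

for n in range(1, 9):
    assert lhs_times_15(n) == 15 * comb(n + 5, 6)
print("identity verified for n = 1..8")
```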
Finding basis for the space spanned by some vectors.
You don't need to guess; just write down the matrix having the vectors as columns: $$\begin{bmatrix} 1 & 2 & 1 & 2 & 3 \\ -2 & -5 & -1 & -1 & 2 \\ 0 & -3 & 3 & 4 & 14 \\ 3 & 6 & 1 & -7 & -17 \end{bmatrix}$$ and proceed with Gaussian elimination; first do $R_2+2R_1$ (add to the second row the first multiplied by $2$) and then $R_4+(-3)R_1$ to get $$\begin{bmatrix} 1 & 2 & 1 & 2 & 3 \\ 0 & -1 & 1 & 3 & 8 \\ 0 & -3 & 3 & 4 & 14 \\ 0 & 0 & -2 & -13 & -26 \end{bmatrix}$$ I usually do pivot reduction, so multiply the second row by $-1$ and then do $R_3+3R_2$ to get $$\begin{bmatrix} 1 & 2 & 1 & 2 & 3 \\ 0 & 1 & -1 & -3 & -8 \\ 0 & 0 & 0 & -5 & -10 \\ 0 & 0 & -2 & -13 & -26 \end{bmatrix}$$ Now swap the third and fourth rows; if you also do pivot reduction you get $$\begin{bmatrix} 1 & 2 & 1 & 2 & 3 \\ 0 & 1 & -1 & -3 & -8 \\ 0 & 0 & 1 & 13/2 & 13 \\ 0 & 0 & 0 & 1 & 2 \\ \end{bmatrix}$$ Since we have pivots in the first four columns, we conclude that $v_1, v_2, v_3, v_4$ span your subspace. But, of course, since the dimension of the subspace is $4$, it is the whole $\mathbb{R}^4$, so any basis of the space would do. These computations are surely easier than computing the determinant of a $4\times 4$ matrix. Note that if the dimension of the subspace were less than $4$, computing a determinant built with any set of four vectors would lead to nothing, while the elimination always works.
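If you just want to confirm the conclusion by machine, here is a one-call check with NumPy (this verifies the rank, not the intermediate elimination steps):

```python
import numpy as np

A = np.array([[ 1,  2,  1,  2,   3],
              [-2, -5, -1, -1,   2],
              [ 0, -3,  3,  4,  14],
              [ 3,  6,  1, -7, -17]], dtype=float)

print(np.linalg.matrix_rank(A))  # 4, so the columns span all of R^4
```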
Calculating Posterior Expected Utilities
Close. You want the expectation: the sum, over the support of $X$, of the utility times the conditional probability. $$\begin{align} \mathsf E(\,U(D=1, X \mid Y=1)\,) & =\mathsf E(\,U(D=1, X) \mid Y=1) \\[1ex] & = \sum_{k\in\{x, \neg x\}} U(D=1, k) \;\mathsf P(k\mid Y=1) \\[1ex] & = U(D=1, x) \;\mathsf P(x\mid Y=1)+U(D=1, \neg x) \;\mathsf P(\neg x\mid Y=1) \\[1ex] & = 400\cdot 0.1 + 2\cdot 0.9 \\[1ex] & = 41.8 \end{align}$$
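The same computation as a tiny Python sketch, with the utilities and posterior probabilities copied from the numbers above:

```python
U = {"x": 400, "not_x": 2}          # utilities U(D=1, .) from the answer
P = {"x": 0.1, "not_x": 0.9}        # posterior P(. | Y=1) from the answer
print(sum(U[k] * P[k] for k in U))  # 41.8
```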
Exercise about the number of subspaces of a certain dimension
This reference (http://math.columbia.edu/~nsnyder/Solutions1.pdf) provides two proofs, one of the same style as in (How to count number of bases and subspaces of a given dimension in a vector space over a finite field?), the other directly linked to the method that you have to use.
Showing a process is a martingale
Note that $\log(\phi(t))$ is not well-defined for $t \in \mathbb{R}$ such that $\phi(t) \leq 0$. Consider for example $X_1 \sim U_{[-1,1]}$, then $\phi(t)= \frac{\sin t}{t}$, thus $\phi(t) \leq 0$ for $t \in [\pi,2\pi]$. To avoid these inconveniences, define $Y_n$ by $$Y_n := e^{\imath \, t \cdot S_n} \cdot \phi(t)^{-n}$$ where $t \in \mathbb{R}$ such that $\phi(t) \not= 0$. For all $t \in \mathbb{R}$ such that $\phi(t) > 0$, this coincides with your definition of $Y_n$ since $$\exp(-n \cdot \log \phi(t)) = \phi(t)^{-n}$$ if $\phi(t)>0$. To prove $(Y_n)_n$ a martingale we use the independence of the random variables $(X_n)_n$: $$\mathbb{E}(Y_n \mid \mathcal{F}_{n-1}) = e^{\imath \, t \cdot S_{n-1}} \cdot \phi(t)^{-n} \cdot \mathbb{E}(e^{\imath \, t \cdot X_n} \mid \mathcal{F}_{n-1}) = e^{\imath \, t \cdot S_{n-1}} \cdot \phi(t)^{-n} \cdot \mathbb{E}(e^{\imath \, t \cdot X_n})$$ Since the random variables $(X_n)_n$ are identically distributed we have $$\mathbb{E}e^{\imath \, t \cdot X_n} = \mathbb{E}e^{\imath \, t \cdot X_1} = \phi(t)$$ Thus $$\mathbb{E}(Y_n \mid \mathcal{F}_{n-1}) = e^{\imath \, t \cdot S_{n-1}} \cdot \phi(t)^{-n} \cdot \phi(t)=Y_{n-1}$$
Prove that the intersection of convex sets is convex using the following three points...
Here is the gist of part 2; let me know if you have any questions. Let $x_1, x_2 \in C_2$ and $t\in [0,1]$. Then, as $g$ is convex over $C_2$, $$g(tx_1 +(1-t)x_2) \le tg(x_1) + (1-t)g(x_2) \le 0$$ since $x_1,x_2\in C_2$, implying that $tx_1 + (1-t)x_2 \in C_2$ and thus that $C_2$ is convex. Part 3 is very similar to parts 1 and 2. Let $x_1, x_2 \in C_3$ and $t\in [0,1]$. Then for any $g_i$, as $g_i$ is convex on $C_3$, $$g_i(tx_1 +(1-t)x_2) \le tg_i(x_1) + (1-t)g_i(x_2) \le 0$$ since $x_1,x_2\in C_3$. Now you need to show the affine part for $x_1,x_2$ ;)
How to Solve an ODE involving three Functions of the Coordinates?
As $x^3 = R$, we have $v_3 = 0$, and so $\frac{dv_3}{dx^3} = 0$. Thus, the equation simplifies to $$\alpha_1v_1+\alpha_2v_2 = 0$$ or $$\alpha_1\frac{dx^1}{dt}+\alpha_2\frac{dx^2}{dt} = 0$$ Integrating, we get $$\alpha_1x^1+\alpha_2x^2 = C$$ where $C$ is a constant depending on the initial conditions of the differential equation.
How to prove or disprove that a matrix is positive definite?
Let $b_{kl}=\frac{\sin{(a_k-a_l)}}{a_k-a_l}$. The symmetric $n\times n$ matrix $M=(b_{kl})$ is positive definite, if and only if $(a_1,\ldots,a_n)$ are distinct. Note that $$\frac{\sin a}{a}=\frac{1}{2}\int_{-1}^1e^{iat}dt$$ Thus, for ${\bf x}=(x_1,\ldots,x_n)\in\Bbb{C}^n$ we have $$\eqalign{ {\bf x}^*M{\bf x}&=\frac{1}{2}\int_{-1}^1\sum_{k,l=1}^nx_k\overline{x_l}e^{i(a_k-a_l)t}dt\cr &=\frac{1}{2}\int_{-1}^1\left\vert\sum_{k=1}^n{x_k}e^{ia_k t}\right\vert^2 dt\geq0\tag{1} } $$ Thus $M$ is positive. Moreover, Suppose that ${\bf x}^*M{\bf x}=0$ we want to prove that $$x_1=x_2=\cdots=x_n=0.$$ From $(1)$ we conclude that the continuous function $t\mapsto \sum_{k=1}^n{x_k}e^{ia_k t}$ is zero on $[-1,1]$. Now, consider the function $f$ defined by $f(z)=\sum_{k=1}^n{x_k}e^{ia_k z}$, this is an analytic function in $\Bbb{C}$ that is equal to $0$ for $z\in[-1,1]$, so it must be identically zero. Consider $j\in\{1,\ldots,n\}$, we have $$ \forall\,t\in \Bbb{R},\quad \sum_{k=1}^nx_ke^{i(a_k-a_j)t}=0 $$ hence, for $T>0$, $$ \sum_{k=1}^nx_k\left(\frac{1}{2T}\int_{-T}^Te^{i(a_k-a_j)t}dt\right)=0 $$ Letting $T$ tend to $\infty$ we conclude that $x_j=0$. Here we use that fact that $$ \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^Te^{i wt}dt=\left\{\matrix{1&w=0\cr 0&w\ne 0}\right. $$ and the announced conclusion follows.
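A numerical illustration of the positive definiteness in Python (the sample points in `a` are arbitrary distinct reals of my choosing; note that `np.sinc(x)` is $\sin(\pi x)/(\pi x)$, hence the division by $\pi$):

```python
import numpy as np

a = np.array([0.3, 1.0, 2.5, 4.0])      # any distinct real numbers
diff = a[:, None] - a[None, :]          # matrix of a_k - a_l
M = np.sinc(diff / np.pi)               # entries sin(a_k - a_l)/(a_k - a_l), 1 on the diagonal
print(np.linalg.eigvalsh(M).min() > 0)  # True: M is positive definite
```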
How to find an integer pair $x$ and $y$ after performing the Euclidean algorithm?
Hint: $$3745a+1172b=1 \implies 3745(ra)+1172(rb)=r.$$
Showing that a path is a non-trivial element of a fundamental group.
Choose the point $(1,0,0)$ as $\tilde{x_0} \in S^2$. Then $\alpha$ is trivial in $\pi_1(\mathbb R P^2)$ if and only if $\alpha$ fixes $\tilde{x_0}$ under the monodromy action since $p_*(\pi_1(S^2))$ is the stabilizer of $\tilde{x_0}$. As you have noted, it does not fix $\tilde{x_0}$.
Prove that if $A ∩ C ⊆ B$ and $a \in C$ then $a \not \in A\setminus B$
Intersect both sides of $A ∩ C ⊆ B$ with $X\setminus B$, where $X$ is the universal set, to get $(A \setminus B) ∩ C = (A ∩ C) \setminus B ⊆ B\setminus B=\emptyset.$ The proof is now immediate: if $a\in C$ and $a \in A\setminus B$, then $a\in (A\setminus B)\cap C=\emptyset$, a contradiction.
Are $i,j,k$ commutative?
We can extend the complex numbers ($a+bi,\ a,b\in\Bbb R$) with two further imaginary units, named $j$ and $k$; if we impose those equations, we arrive at the quaternions, where commutativity indeed fails, by B) and C).
Find Indefinite of root function
HINT: Set $$\sqrt{\frac{x-4}{x+2}}=t, \frac{x-4}{x+2}=t^2,t^2-1=\frac{-6}{x+2}, x+2=\frac6{1-t^2}$$
Conditional probability with balls in an urn
If there are $n$ black balls in the urn and the second draw is a black ball, then there are $10 + (n - 1)$ balls available for the first draw ($10$ white and the $n-1$ remaining black), so $$\frac{10}{10 + (n - 1)} = \frac{1}{3},$$ which makes $n = 21$.
Asymptotes of Functions
Because $f(x)$ is a rational function, $f(x) = \frac{g(x)}{h(x)}$ for polynomials $g$ and $h$. Then: $\frac{f(x)}{x-2} = \frac{g(x)}{(x - 2)h(x)}$ Thus, $\frac{f(x)}{x-2}$ is a rational function with a factor of $x - 2$ in the denominator, and its vertical asymptote is therefore $\boxed{x = 2}$. Note: if $g(x)$ contains a factor of $x - 2$, then $\frac{f(x)}{x-2}$ will only have a hole at $x = 2$.
Non orientable surface of genus $g$
Take $j=1$, just for clarity. Consider that the segments $(1−t)z_1+tz_2$ and $(1−t)z_2+tz_3$, where $1/3 \leq t \leq 2/3$, are identified point-to-point by the equivalence relation. Draw a neighbourhood of the two segments in the polygon, join the starting point of the first segment to the corresponding point of the second segment while remaining in the neighbourhood you have drawn, and do the same for the end points. You should have got a Möbius strip: by definition, a surface that "contains" a Möbius strip is non-orientable (a motivation may be the following: if a surface contains a Möbius strip and is assumed to be smooth, then you cannot define a smooth normal vector field on the whole surface). The fact that this surface has a property named "genus" equal to $g$ requires, to be justified, talking about the fundamental group of a surface, and is a little more complicated...
Extension of a non-negative and symmetric real valued function to a pseudometric
Usually, by an extension of a function $f$ defined on a set $X$ (or of a pseudometric $d$ defined on $X\times X$) one understands a function $\bar f$ defined on a set $Y\supset X$ (resp. a pseudometric $\bar d$ defined on $Y\times Y$) such that the restriction $\bar f|X$ coincides with $f$ (resp. $\bar d|X\times X=d$). I cite (with a correction) the beginning of my student paper “On Extension of (Pseudo-)Metrics from Subgroup of Topological Group onto the Group”: “The problem of extensions of functions from subobjects to objects in various categories was considered by many authors. The classic Tietze-Urysohn theorem on extensions of functions from a closed subspace of a topological space and its generalizations belong to the known results. Hausdorff [Hau] showed that every metric from a closed subspace of a metrizable space can be extended onto the space. Isbell [Isb, Lemma 1.4] showed that every bounded uniformly continuous pseudometric on a subspace of a uniform space can be extended to a bounded uniformly continuous pseudometric on the whole space. The linear operators extending metrics from a closed subspace of a metrizable space onto the space were considered in, e.g., [Bes, Zar]". If we have a symmetric non-negative function $d$ on $X\times X$ such that $d(x,x)=0$ for each $x\in X$, a standard way to modify $d$ into a pseudometric $d’\le d$ is to put $$d’(x,y)=\inf\left\{\sum_{i=1}^{n} d(x_{i-1},x_i):x_1,\dots, x_n\in X, x_0=x, x_n=y\right\}.$$ Remark that $d’$ may fail to be a metric even when $d(x,y)=0\Rightarrow x=y$ for each $x,y\in X$. References [Bes] Bessaga C., Functional analytic aspects of geometry. Linear extending of metrics and related problems, in: Progress of Functional Analysis, Proc. Peniscola Meeting 1990 on the 60th birthday of Professor M. Valdivia, North-Holland, Amsterdam (1992), 247-257. [Hau] Hausdorff F., Erweiterung einer Homöomorphie, Fund. Math., 16 (1930), 353-360. [Isb] Isbell J.R., On finite-dimensional uniform spaces, Pacific J. Math., 9 (1959), 107-121. [Zar] Zarichnyi M., Regular Linear Operators Extending Metrics: a Short Proof, Bull. Pol. Acad. Sci. Math., 44 (1996), 267-269.
Question in real analysis
In the real numbers every Cauchy sequence converges to some real number (this property is essentially equivalent to the completeness of the real numbers). A proof sketch is: a Cauchy sequence is bounded, thus, by the Bolzano-Weierstrass theorem, it has a convergent subsequence (this is the completeness bit actually); and for a Cauchy sequence, the limit of a convergent subsequence must actually be the limit of the entire sequence. Now, you probably proved that weak inequalities are preserved when passing to the limit. That is, if a sequence $a_n$ converges to $a$ and $a_n\le t$ for all $n$, then $a\le t$; similarly for inequalities in the other direction. So now you have a sequence in $[0,1]$ which is Cauchy, so it converges to some point $a$. By preservation of weak inequalities, the limit point satisfies $a\in [0,1]$.
Why does my result have the opposite sign of the correct result?
The mistake is when you stated that $(q-5)(q+2)=0$ has solutions $q=-5,q=2$. Instead, you have $$q-5=0\quad\text{and}\quad q+2=0$$ giving $q=5$ and $q=-2$ and so substituting gives $$3=p+5\implies p=-2\implies pq=-2(5)=-10$$ and $$3=p-2\implies p=5\implies pq=5(-2)=-10$$ as desired.
Prove that all the roots of the equation $(1+z)^6+z^6=0$ are collinear.
Suppose $z$ satisfies $(1+z)^6 + z^6 = 0$. Then we have $$|1+z|^6 = |z|^6 \implies |z| = |z+1|$$ It follows that $$z\overline{z} = |z|^2 = |z+1|^2 = (z+1)(\overline{z} + 1) = z\overline{z} + 2 \text{Re}(z) + 1 \implies \text{Re}(z) = \frac{-1}{2}$$ Thus, all roots of the equation $(1+z)^6 + z^6$ lie on the line given by $\text{Re}(z) = -1/2$.
Deriving a formula for the coefficients of a power series.
Hint: The statement of this problem shows some serious flaws.

- $u(t)=u_0+u_1x+u_2x^2+u_3x^3+u_4x^4+\cdots+u_nx^n$: the left-hand side indicates a function in $t$ while the right-hand side indicates a function in $x$.
- $d(t)=d+d_1x+d_2x^2+d_3x^3+d_4x^4+\cdots+dx^n$: the same flaw as above. Additionally, the first and last coefficients are both written $d$ but should presumably be $d\color{blue}{_0}$ and $d\color{blue}{_n}$, which are different values in general.
- $q_i=\frac{\sum_{k=0}^{i-1}q_kd_{i-k}}{d_0}$: the range of $i$ is missing here, and the formula for $q_i$ seems to be incorrect.

Usually we would make the approach \begin{align*} q(x)=q_0+q_1x+q_2x^2\cdots+q_nx^n=\frac{u(x)}{d(x)} \end{align*} from which we derive \begin{align*} &u_0+u_1x+u_2x^2+\cdots+u_nx^n\\ &\qquad=(q_0+q_1x+q_2x^2+\cdots+q_nx^n)(d_0+d_1x+d_2x^2+\cdots d_nx^n) \end{align*} We now compare terms with equal powers and analyse \begin{align*} u_0&=q_0d_0\\ u_1&=q_0d_1+q_1d_0\\ u_2&=q_0d_2+q_1d_1+q_2d_0\\ &\vdots\\ u_n&=q_0d_n+q_1d_{n-1}+\cdots+q_nd_0 \end{align*} from which another formula follows.
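The comparison of coefficients can be turned directly into the recursion $q_i=\bigl(u_i-\sum_{k=0}^{i-1}q_kd_{i-k}\bigr)/d_0$; here is a minimal Python sketch (function name is mine):

```python
def series_quotient(u, d, n):
    """Coefficients q_0..q_n of the power series u(x)/d(x), assuming d[0] != 0."""
    q = []
    for i in range(n + 1):
        acc = u[i] if i < len(u) else 0
        for k in range(i):
            dk = d[i - k] if i - k < len(d) else 0
            acc -= q[k] * dk
        q.append(acc / d[0])
    return q

# Sanity check: 1/(1-x) = 1 + x + x^2 + ...
print(series_quotient([1], [1, -1], 5))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```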
Show that a set is measurable with respect to Borel product $\sigma$-algebra
$f(x, y) = |x - y|$ is continuous, so the preimage of $(-\infty, 1)$ is open, hence Borel.
Geometrical Application of Complex Numbers
It's great that you realized this inequality represents all the points in the interior and on the circumference of the circle centered at $(3,4)$ with radius $3$. If you know the basics, I suggest you skip directly to point 2. 1. Understanding the modulus of a complex number: if $z=x+iy$, then $|z|=\sqrt{x^2+y^2}$, which effectively represents the distance of the point $(x,y)$ from the origin. When we talk about $|z-z'|$ (where $z'$ is another complex number), we mean the distance of the complex number $z$ from $z'$ on the Argand plane. So the inequality that you encountered represents any variable complex number $z$ at distance $3$ (or less) from $z'$ (which in your case is $3+4i$, i.e. the point $(3,4)$ on the Argand plane), and thus it represents a closed disc. 2. Coming to the question: draw a rough sketch of the circle (center $(3,4)$, radius $3$) on paper with a coordinate system, and you will instantly see that the point of least modulus (distance from the origin) lies on the circumference of the circle, on the line joining the origin and the center of the circle. In this case the least modulus is $5-3=2$.
Approximation of multiplication with derivatives of one multiplier.
So much depends upon the nature of $a(t)$ and $x(t)$ that I do not think a meaningful answer is possible. When asking a very general question such as this, try a simple example. Let $a(t)=t$ and, since we presumably would like lots of derivatives, let $x(t)=\sin(t)$. So we want some fixed values $a_0,a_1,a_2,\cdots$ such that we can say \begin{align} t\sin(t)&\approx a_0\sin(t)+a_1\cos(t)-a_2\sin(t)-a_3\cos(t)+\cdots\\ &=\sin(t)\sum_{k=0}^\infty(-1)^ka_{2k}+\cos(t)\sum_{k=0}^\infty(-1)^ka_{2k+1} \end{align} So certainly the sums must converge, and we find we must approximate an unbounded function by a bounded function. Possibly this could be done over some limited domain for this particular example, but no mention was made of such limits in the question. Complications arising from simple examples should convince one of the necessity of further circumscribing the problem.
Can this extension of fields be transcendental?
Unless I missed something, $R[x]/\mathscr{P}$ is generated as an algebra over $R/\mathfrak{m}$ by the class $\bar x$ of $x$ (because any element of $R[x]$ is a polynomial in $x$ with coefficients in $R$, and its class mod $\mathscr{P}$ is therefore a polynomial in $\bar x$ with coefficients in $R/\mathfrak{m}$). But a field extension which is finitely generated as an algebra is algebraic.
Order Statistics (Sample Median, Range)
HINT Start by thinking about / researching some facts about order statistics of iid uniform RVs. This one may be useful: conditional on the maximum $X_{(n)} = m$, the remaining order statistics are distributed in the same way as the order statistics of a sample of $n-1$ independent $U(0,m)$ variables.
possibly simple exercise in algebraic topology regarding Bockstein homomorphism
Note that for any abelian group $A$, there is a natural isomorphism $\operatorname{Ext}^1_\mathbb{Z}(\mathbb{Z}/2\mathbb{Z},A)\cong A/2A$. Indeed, this is just what you get by computing this Ext using the resolution $0\to\mathbb{Z}\stackrel{2}\to\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}\to 0$ of $\mathbb{Z}/2\mathbb{Z}$. In particular, let's take $A=\ker\mu$ and consider the class $x\in\operatorname{Ext}^1_\mathbb{Z}(\mathbb{Z}/2\mathbb{Z},A)\cong A/2A$ representing the extension $$0\to A \to \Theta^\mathbb{Z}_3\overset{\mu}{\to} \mathbb{Z}/2\mathbb{Z}\to 0.$$ Since $A/2A$ is a vector space over $\mathbb{Z}/2\mathbb{Z}$ and $x$ is nonzero (the SES does not split), there is a homomorphism $A/2A\to\mathbb{Z}/2\mathbb{Z}$ which sends $x$ to $1$. Composing this with a quotient map $A\to A/2A$, we get a homomorphism $f:A\to\mathbb{Z}/2\mathbb{Z}$. This homomorphism has the property that the induced map $\operatorname{Ext}^1_\mathbb{Z}(\mathbb{Z}/2\mathbb{Z},A)\to\operatorname{Ext}^1_\mathbb{Z}(\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/2\mathbb{Z})$ sends $x$ to the nonzero element of $\operatorname{Ext}^1_\mathbb{Z}(\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/2\mathbb{Z})$, which represents the extension $$0\to\mathbb{Z}/2\mathbb{Z}\to \mathbb{Z}/4\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}\to 0.$$ This implies that there is a homomorphism $g$ filling in the following diagram: $$\require{AMScd} \begin{CD} 0 @>>>A@>{}>> \Theta^\mathbb{Z}_3 @>{\mu}>> \mathbb{Z}/2\mathbb{Z} @>>>0\\ & @V{f}VV @V{g}VV @V{1}VV \\ 0 @>>>\mathbb{Z}/2\mathbb{Z}@>{}>> \mathbb{Z}/4\mathbb{Z} @>>> \mathbb{Z}/2\mathbb{Z} @>>>0\\ \end{CD}$$ (Explicitly, if we choose an element $y\in A$ whose image in $A/2A$ is $x$, then we can identify $\Theta^\mathbb{Z}_3$ with the set $A\times\mathbb{Z}/2\mathbb{Z}$ with the group operation which is coordinatewise except that $(0,1)+(0,1)=(y,0)$. The map $g$ then sends $(a,i)\in A\times\mathbb{Z}/2\mathbb{Z}$ to $2f(a)+i$. This works because we chose $f$ so that $f(y)=1$.) This map of short exact sequences then induces maps between the associated long exact sequences in cohomology. In particular, the induced maps on cohomology commute with the Bocksteins, so the induced map $f_*:H^5(M;A)\to H^5(M;\mathbb{Z}/2\mathbb{Z})$ satisfies $f_*(\delta(c))=Sq^1(c)$. Taking $c=\Delta(M)$ gives your desired result.
Practical significance of $\frac{17}{21}$=$\frac{1}{2}$+$\frac{1}{6}$+$\frac{1}{7}$
The point is just that everyone can get the right amount of grain if I give everyone half a bushel, a sixth of a bushel, and a seventh of a bushel. The article you originally linked goes on to note that the obvious greedy algorithm outputs 1/2 + 1/4 + 1/17 + 1/1428, which is an extremely unhelpful division (who has the ability to divide a bushel into 1428ths?).
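For concreteness, the greedy algorithm mentioned here is only a few lines of Python with exact rational arithmetic:

```python
from fractions import Fraction
from math import ceil

def greedy_egyptian(frac):
    """Greedy Egyptian-fraction expansion: repeatedly take the largest 1/n <= frac."""
    denominators = []
    while frac > 0:
        n = ceil(1 / frac)          # smallest n with 1/n <= frac
        denominators.append(n)
        frac -= Fraction(1, n)
    return denominators

print(greedy_egyptian(Fraction(17, 21)))  # [2, 4, 17, 1428]
```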
Chernoff bound - finding conclusion for fair coin
Thanks to @Did's help, the solution for 3 might look like this: Let $0<\alpha=\mathbb{E}(u)+\epsilon<1$, where $\mathbb{E}(u)=\frac{1}{2}$, in $\mathbb{P}[u\geq\alpha]\leq(e^{-s\alpha}U(s))^{N}\\\mathbb{P}[u\geq\mathbb{E}(u)+\epsilon]\leq(e^{-s(\mathbb{E}(u)+\epsilon)}U(s))^{N}$ where $U(s)=\frac{1}{2}(1+e^{s})$. Because we are interested in the tightest bound, I assumed that $s=\ln(\frac{\alpha}{1-\alpha})$. For the RHS of the inequality, skipping the $N$th power, this yields $e^{-s\alpha}U(s)=\frac{1}{2}e^{-s\alpha}(1+e^{s})=\frac{1}{2}e^{-\ln(\frac{\alpha}{1-\alpha})\alpha}(1+e^{\ln(\frac{\alpha}{1-\alpha})})=\frac{1}{2}(\frac{\alpha}{1-\alpha})^{-\alpha}(1+\frac{\alpha}{1-\alpha})\\=\frac{1}{2}\frac{\alpha^{-\alpha}}{(1-\alpha)^{-\alpha}}(\frac{1}{1-\alpha})=\frac{1}{2}\alpha^{-\alpha}(1-\alpha)^{\alpha}(1-\alpha)^{-1}=\frac{1}{2}\alpha^{-\alpha}(1-\alpha)^{\alpha-1}$ Using the above equation I want $2^{-\beta N}=(e^{-s\alpha}U(s))^{N}$, hence $2^{-\beta N}=(e^{-s\alpha}U(s))^{N}\\2^{-\beta}=e^{-s\alpha}U(s)\\2^{-\beta}=\frac{1}{2}\alpha^{-\alpha}(1-\alpha)^{\alpha-1}\\-\beta=\log_{2}(\frac{1}{2}\alpha^{-\alpha}(1-\alpha)^{\alpha-1})\\-\beta=\log_{2}(\frac{1}{2})+\log_{2}(\alpha^{-\alpha})+\log_{2}((1-\alpha)^{\alpha-1})\\-\beta=-1-\alpha\log_{2}(\alpha)+(\alpha-1)\log_{2}(1-\alpha)\\\beta=1+\alpha\log_{2}(\alpha)+(1-\alpha)\log_{2}(1-\alpha)$ Since $\alpha=\mathbb{E}(u)+\epsilon=\frac{1}{2}+\epsilon$, we get $\beta=1+(\frac{1}{2}+\epsilon)\log_{2}(\frac{1}{2}+\epsilon)+(\frac{1}{2}-\epsilon)\log_{2}(\frac{1}{2}-\epsilon)$, which was to be shown.
Existence of sequence whose set of subsequential limits is $[0,1].$
Sure this is possible! There are a number of ways to do this, one of them being to use the Farey sequence. We can recursively build our sequence by starting with $x_0=0$ and $x_1=1$ and then going to the next new rational number in the order in which they appear in the Farey sequence, breaking ties ($1/4$ and $3/4$ get introduced at the same step) by putting the smaller one first. This sequence slowly "refines" rational estimates, and so, since $\mathbb{Q}\cap[0,1]$ is dense in $[0,1]$, it has a subsequence converging to every real number in $[0,1]$. This particular example is very important in continued fraction theory. Analogously, you can do something similar refining by decimal digit. The first few elements of that sequence are $$0,1,0.1,0.2,0.3,\ldots,0.9,0.01,0.02,\ldots, 0.11,0.12,\ldots,0.98,0.99,0.001,\ldots$$ It turns out that any surjection from $\mathbb{N}$ onto $\mathbb{Q}\cap[0,1]$ will do as your sequence. Can you figure out why?
Solving an equation involving a limit for two variables
First of all, since $\lim_{x\to0}\sin2x=0$, the numerator must have limit $0$. Thus $$ \sqrt{ae}-3=0 $$ and therefore $ae=9$. So, after rationalizing, you have $$ \lim_{x\to0}\frac{cx\arccos x} {\sin 2x(\sqrt{9e^{2x}+cx\arccos x}+3e^x)}= \frac{1}{2}\frac{c\frac{\pi}{2}}{\sqrt{9}+3} $$ by pairing the $x$ in the numerator with $\sin2x$ in the denominator.
For which point $P$ on the $x$-axis is the angle $\angle BPA$ greatest?
Assuming $A$ and $B$ lie on the same side of the $x$-axis. Take a point $C$ on the $x$-axis, and draw the circle going through $A, B$ and $C$. If there are points of the $x$-axis inside this circle, then any of those points will make a larger angle than $\angle ACB$. This is a corollary of the inscribed angle theorem, which says that any point on the circumference of the circle (on the same side of $AB$) gives the same angle. Thus for any point $D$ of the circle below the $x$-axis, any point of the $x$-axis inside the triangle $\triangle ADB$ will give a point on the $x$-axis yielding a larger angle than $\angle ACB$. So what you want is the circle going through $A$ and $B$ which touches the $x$-axis, and the point where this circle touches the $x$-axis is your $P$. If the line $AB$ is parallel to the $x$-axis, then the solution is easy. If not, then find the point $Q$ where $AB$ intersects the $x$-axis. By the power of a point, we have $|PQ|^2 = |AQ|\cdot |BQ|$. This should be enough to locate $P$, except you get two solutions. These two solutions are the two local maxima for the angle $\angle APB$, one on either side of the line $AB$. The one of these which lies "below" the line is the global maximum (unless $AB$ is vertical, in which case the two solutions are, of course, equivalent). In your example, as far as I can see, we have $A = (-6, 8)$ and $B = (4, 3)$, meaning $Q = (10, 0)$. This gives $$ |AQ|\cdot |BQ| = \sqrt{256 + 64}\cdot\sqrt{36 + 9}\\ = \sqrt{14400} = 120 $$ So $P = (10-\sqrt{120}, 0)$ (while the solution $(10+\sqrt{120}, 0)$ is discarded). By the law of cosines, we get $$ \cos\angle APB = \frac{|AP|^2 + |BP|^2 - |AB|^2}{2|AP|\cdot |BP|}\\ = \frac{(\sqrt{120}- 16)^2 + 8^2 + (\sqrt{120} - 6)^2 + 3^2 - 10^2 - 5^2}{2\sqrt{(\sqrt{120}- 16)^2 + 8^2}\sqrt{(\sqrt{120} - 6)^2 + 3^2}}\\ \approx -0.0182 $$ yielding $\angle APB\approx 91.04^\circ$
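For anyone who wants to verify the final numbers, a short Python check reproducing the law-of-cosines computation above:

```python
from math import sqrt, acos, degrees

A, B = (-6.0, 8.0), (4.0, 3.0)
P = (10.0 - sqrt(120.0), 0.0)     # the tangency point found via power of a point

def d2(U, V):
    """Squared Euclidean distance."""
    return (U[0] - V[0]) ** 2 + (U[1] - V[1]) ** 2

cosang = (d2(A, P) + d2(B, P) - d2(A, B)) / (2 * sqrt(d2(A, P) * d2(B, P)))
print(degrees(acos(cosang)))      # ~91.04 degrees
```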
Let $S$ be a finite subset of $\mathbb R^3$ such that any 3 vectors in $S$ span a $2$-dimensional subspace; then $S$ also spans a $2$-dimensional space
Hint 1 There are two linearly independent vectors $s_{1}, s_{2} \in S$. Just take three (distinct) vectors in $S$, and see that two of them must be linearly independent. Hint 2 For each $s \in S$, the subspace generated by $s_{1}, s_{2}, s$ has dimension $2$. Hint 3 $s$ is a linear combination of $s_{1}, s_{2}$.
Decompose nuclear norm
Hint: Let $|M| = \sqrt{M^TM}$. Note that $|A|^2$ and $|B|^2$ commute, with $|A|^2\;|B|^2 = 0$. Since there exist polynomials $p,q$ with $|A| = p(A^TA)$ and $|B| = q(B^TB)$, we can conclude that $|A|,|B|$ commute, with $|A| \, |B| = 0$. So, we have $$ (|A| + |B|)^2 = |A|^2 + |B|^2 = A^TA + B^TB = |A + B|^2. $$ So, $|A+B| = |A| + |B|$.
Disjoint open sets in $\mathbb{R}^N$
Hints: Every open set must contain a rational point (one with all coordinates rational). Take a discrete metric space on some point set.
Any insight about this sequence of numbers?
You are finding the least number $m$ such that $2^m \equiv \pm 1\mod 2n + 1$, where $n$ is the entry in the sequence. See http://oeis.org/A003558.
How to formalize this last step in my proof?
The event $A=\{\sum_{k=1}^n X_k/k \to +\infty\}$ contains the event $B = \{\exists a, \sum_{k=1}^n X_k/k - \sum_{k=1}^n \xi/k \to a\}$. You've shown $P(B)=1$, therefore $P(A)=1$. To show $B\subseteq A$, let $\omega\in B$, so $$\sum_{k=1}^n X_k(\omega)/k - \sum_{k=1}^n \xi/k \to a$$ Now just work deterministically to show that $\omega\in A$. More generally, if $x_n$ and $y_n$ are sequences of real numbers, $y_n\to+\infty$ and $x_n-y_n\to a$, then we must have $x_n\to+\infty$, by applying additivity of limits to $x_n=(x_n-y_n)+y_n$.
Help with two calculus proofs
If you write out the definition of a limit as it applies to $\lim _{n\to \infty}f(Q_{n})=+\infty$, it's something equivalent to For any $M\in \mathbb R$ there exists $n_0\in\mathbb N$ such that for every $n \in \mathbb N$, if $n > n_0$ then $f(Q_n) > M.$ I say "equivalent to" because your textbook might put it in a slightly different form. I would recommend to use the textbook's form rather than mine, but try to apply the same ideas. You don't necessarily need to quote the definition in the proof. You can take some steps to apply it to your particular problem before writing it in the proof, but it helps if you don't skip too many steps. And if you get confused, you can write all the steps on a piece of scrap paper in as much detail as you need to sort them out before writing this part of the proof. So you might choose to let $M = 0$ for your application. I think I might write the result into the proof before going any farther: Then there exists $n_0\in\mathbb N$ such that for every $n \in \mathbb N$, if $n > n_0$ then $f(Q_n) > 0.$ Now you want a particular $n$ as described in the sentence above. An easy way is to let $n_1 = n_0+1,$ and now you know such a number $n_1$ exists (because $n_0$ exists) and you know something about $f(Q_{n_1}).$ Unless your definition of limit has "$n \geq n_0$" where I wrote "$n > n_0$", however, you won't get much use out of $f(Q_{n_0}).$ Also, so far none of this has anything to do with $f$ being continuous. It's simply and purely about the definition of a limit. Next you set up $\alpha:[0,1]\rightarrow\mathbb{R}$, but I think you want $\alpha:[0,1]\rightarrow\mathbb R^2$, since $(1,2)$ is not in $\mathbb R.$ I think later you meant to say $g(1) > 0$ rather than $g(1) > 1.$ A typo? If you've already done Bolzano's theorem, you don't need to repeat its proof. Just state the theorem, show you have certain things corresponding to its premises, and then write the corresponding conclusion. When you get more fluent with proof-writing you won't even have to state the theorem literally, just write something like, "Then by Bolzano's theorem, ... ." For part b), the proof depends on what tools you've already developed. But I would probably look at $(P-Q) \cdot \nabla f(\alpha(t))$ as a function of $t.$ I would like to point out that it can happen in part b) that the only way for $\nabla f$ to be "orthogonal" to $P-Q$ is when $\nabla f(x,y) = 0.$ For example, consider the function $f(x,y) = x^2 - 4$ and the points $P = (2,0)$ and $Q = (-2,0).$
Possible words from a given alphabet up to a maximum length
As JMoravitz says it's more like possible strings from a given set of characters. Is there a more formal, more abstract, terminology for this? There is a variety of terminology: e.g. take a look at this report of a seminar on Combinatorics and Algorithmics of Strings, which starts by explaining that the subject is "Strings (aka sequences or words)". On a quick scan of the titles in the contents page, word seems to be the most popular term. I think that word is almost certainly going to be the most popular term among automata theorists, because they consider these objects normally as elements of languages. The individual elements which are chained to form them ($a$, $b$, $c$ in your example) are commonly called symbols, and the set of symbols is the alphabet, commonly denoted $\Sigma$. The term string seems popular with combinatorialists, judging by a quick search experiment at the Online Encyclopedia of Integer Sequences, where string gets 3862 hits and the first page seems to use it only about sequences of symbols. Word gets 7259 hits, but some of those on the first page are clearly using it in other contexts. (Whether words in a Coxeter group are words in the sense of strings/sequences/words is a question of perspective: at a formal level they clearly are, but the questions which you ask about them will tend to be different). Searching for sequence on that site is obviously not going to cast light on this particular usage... But e.g. one talks about combinatorics on words rather than combinatorics on strings. The number of words of length $i$ over an alphabet of size $l$ is $l^i$: if this isn't "obvious" then prove it by induction on the length. The number of words of lengths $1$ to $n$ is therefore $\sum_{i=1}^n l^i$ as you state. Note that this is a geometric progression, and can be written in closed form as $\frac{l^{n+1}-1}{l-1}-1$ or $\frac{l(l^n - 1)}{l-1}$.
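A trivial Python check of the counting formula and its closed form:

```python
def count_words(l, n):
    """Number of words of lengths 1..n over an alphabet of size l."""
    return sum(l**i for i in range(1, n + 1))

l, n = 3, 5
assert count_words(l, n) == l * (l**n - 1) // (l - 1)  # closed form from the answer
print(count_words(l, n))  # 363
```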
Prove that $\chi_y = \prod\limits_{\sigma \in G} \left (X-\sigma(y) \right ).$
Hint: First, remark that if $\deg(y)=n$, then $\mu_y=\chi_y$. To see this, consider the basis $1,y,\dots,y^{n-1}$ and write the matrix of $T_y$ in this basis. Let $p=[L:K(y)]$ and remark that $p=|G_y|$. Let $y_1,\dots,y_p$ be a basis of $L$ over $K(y)$; remark that $y_1,y_1y,\dots,y_1y^{n-1},\dots,y_i,y_iy,\dots,y_iy^{n-1},\dots$ is a basis of the $K$-vector space $L$. Compute the matrix of $T_y$ in this basis and its characteristic polynomial.
linear combination of infinitely divisible random variables
I doubt that this is true in general. Counterexample: let $W_1$ be a standard normal random variable, and $W_2 = W_1\ \text{if}\ |W_1|\le 1$, $W_2 = -W_1\ \text{otherwise}$. Then $W_2$ is also a standard normal r.v., but $W_1+W_2$ has bounded support and is not constant, and is therefore not infinitely divisible, according to http://web.abo.fi/fak/mnf/mate/gradschool/summer_school/tammerfors2011/slides_rosinski.pdf
Clarification on a notation in ODE: $u^{(iv)} = 0$
The comment by Nate Eldredge, although stated in a tentative form, is the only plausible interpretation of this notation. One possibility is it's the 4th derivative of $u$. Some people think of $u'$, $u''$, $u'''$ as having Roman numerals I, II, III in the exponent, in which case IV would naturally be the 4th derivative. But without more context it's hard to know for sure. -- Nate Eldredge
Complex numbers and absolute values
The absolute value of a complex number (sometimes called its modulus) is real by construction. If $\psi=\alpha+i\beta$ with $\alpha$ and $\beta$ real, then $|\psi|=\sqrt{\alpha^2+\beta^2}$.
How does Hadamard encoding and decoding using inner product or generator matrix work?
The article you link explicitly states "Then the Hadamard encoding of $x$ is defined as the sequence of all inner products with $x$: $$ \mathrm{Had}(x) = \left(\langle x,y\rangle\right)_{y\in\{0,1\}^k}$$" The encoding is a tuple of $2^k$ values, one for each $y\in\{0,1\}^k$. That's what "$y$" is: for every $y$, you compute the inner product with $x$, and store it in one of the coordinates. Again, the article states "where the message $x$ is viewed as a row vector." This now makes sense: $x$ is a $1\times k$ matrix (row vector), the punctured matrix $G'$ is a $k\times 2^{k-1}$ matrix, so that $$ xG'$$ is well-defined and gives a $1\times 2^{k-1}$ matrix (row vector).
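A direct transcription of that definition into Python (all arithmetic over GF(2); the function name is mine, and the coordinates are ordered lexicographically in $y$):

```python
from itertools import product

def hadamard_encode(x):
    """All inner products <x, y> mod 2, for every y in {0,1}^k."""
    k = len(x)
    return [sum(xi * yi for xi, yi in zip(x, y)) % 2
            for y in product((0, 1), repeat=k)]

print(hadamard_encode((1, 0, 1)))  # a 2^3 = 8-bit codeword: [0, 1, 0, 1, 1, 0, 1, 0]
```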
Can the composition of a strictly monotonic and a measurable function be non-measurable?
I assume that by "strongly monotonic" you mean "strictly monotonic", i.e. (in the increasing case) $x < y$ implies $g(x) < g(y)$. Yes, there is. Let $M$ be any non-measurable subset of $[0,1]$. There is a strictly monotonic function $g$ from $[0,1]$ into the Cantor set. Then $E = g(M)$ is Lebesgue measurable because it has measure $0$, but $g^{-1}(E) = M$ is non-measurable.
How were the solutions to these differential equations found?
Ansatz

This seems to be a case for separation of variables, as the variable with respect to which you differentiate does not appear on the right-hand side of the differential equations: $$ y'(x) = f(y) \iff dy = f(y)\,dx \iff \int \!\!\frac{dy}{f(y)} = \int\!\! dx = x + C $$ for some integration constant $C$.

Equation 1

This equation features the RHS $f(y) = \sqrt{k \sin y}$. Using separation of variables we get $$ G(y) := \int\limits_{y_0}^y \!\!\frac{dY}{\sqrt{k \sin Y}} = x - x_0 $$ For the initial conditions $(x_0, y_0) = (0, 0)$ this formula reduces to $$ G(y) := \int\limits_{0}^y \!\!\frac{dY}{\sqrt{k \sin Y}} = x $$ This will probably need numerical integration to yield a graph / table $x = G(y)$ and numerical inversion to yield $y = G^{-1}(x)$.

Connection to the Wolfram Alpha solution

$$ y(x) = \frac{1}{2} \left(\pi-4\, \mathrm{am}\left(\frac{1}{2} \left(\sqrt{k} \, x + C\right) \large | 2\right)\right) $$ Asking Wolfram Alpha for an indefinite integral using this query yields: $$ \int \!\!\frac{dt}{\sqrt{\sin t}} = - 2\, F\left(\frac{1}{4}(\pi-2 t) \,\large|\, 2\right) + C \qquad(*) $$ for some constant $C$, where $F(z\,|\,m)$ is the elliptic integral of the first kind $$ F(z \,|\, m) = \int\limits_0^z\!\!\frac{dt}{\sqrt{1 - m \sin^2 t}} $$ Like $G$ above, this $F$ is not an elementary function but is defined by an integral too. It is only easier in the sense that it is a well-studied function for which you can get numerical implementations, instead of integrating numerically yourself. Its inverse is the Jacobi amplitude $\mathrm{am}(w \,|\, m)$, which fulfills $$ w = F(z \,|\, m) \iff \mathrm{am}(w \,|\, m) = z $$

Justification for equation (*)

$$ F\left(\frac{1}{4}(\pi-2 t) \,\large|\, 2\right) = \int\limits_0^{\frac{1}{4}(\pi-2 t)}\!\!\frac{du}{\sqrt{1 - 2 \sin^2 u}} = \int\limits_0^{\frac{1}{4}(\pi-2 t)}\!\!\frac{du}{\sqrt{\cos 2u}} $$ because $1 - 2 \sin^2 u = \cos 2u$. With $v = 2u$ this yields $$ F\left(\frac{1}{4}(\pi-2 t) \,\large|\, 2\right) = \frac{1}{2}\int\limits_0^{\frac{\pi}{2} - t}\!\!\frac{dv}{\sqrt{\cos v}} $$ With $w = \frac{\pi}{2} - v$ we get $$ F\left(\frac{1}{4}(\pi-2 t) \,\large|\, 2\right) = -\frac{1}{2}\int\limits_{\frac{\pi}{2}}^t\!\!\frac{dw}{\sqrt{\cos(\frac{\pi}{2} - w)}} = -\frac{1}{2}\int\limits_{\frac{\pi}{2}}^t\!\!\frac{dw}{\sqrt{\sin w}} $$ because $\cos(\frac{\pi}{2} - w) = \sin w$.

Equation 2

For the other equation we need to deal with the second derivative $y''$; the idea here is to treat the first derivative $y'$ formally as another independent variable together with $y$, and to have one side of the equation with only $y'$ and the other with only $y$: $$ y'' = \frac{dy'}{dx} = \frac{dy'}{dy}\frac{dy}{dx} = \frac{dy'}{dy} y' = k \cos y \iff y' \, dy' = k \cos y \, dy $$ Thus $$ \int\limits_{y'_0}^{y'} \! Y' \, dY' = \int\limits_{y_0}^y \! k \cos Y \, dY \iff $$ $$ (y')^2 - (y'_0)^2 = \int\limits_{y_0}^y \! 2 k \cos Y \, dY = 2 k \sin y - 2 k \sin y_0 $$ With $(y_0, y'_0) = (0, 0)$ this reduces to $$ (y')^2 = 2 k \sin y $$ and yields $$ G(y) := \int\limits_0^y \!\! \frac{dY}{\sqrt{2 k \sin Y}} = x $$
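If you want the graph / table $x = G(y)$ for Equation 1 numerically, here is a minimal SciPy sketch (taking $k=1$ as an arbitrary example value; `quad` copes with the integrable $1/\sqrt{Y}$-type singularity at $0$):

```python
from math import sin, sqrt
from scipy.integrate import quad

k = 1.0  # example value; the question's k is unspecified here

def G(y):
    """x = G(y) = integral of 1/sqrt(k sin Y) from 0 to y."""
    val, _ = quad(lambda Y: 1.0 / sqrt(k * sin(Y)), 0.0, y)
    return val

for y in (0.5, 1.0, 1.5):
    print(y, G(y))  # a table of (y, x) pairs; invert numerically for y = G^{-1}(x)
```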
Is the set of states of an infinite dimensional unital $C^*$-algebra non-compact in the norm topology?
We can write each linear functional in $B_1$ as a linear combination of four states, with all coefficients in the closed unit disk. It follows that if $E_{\mathfrak A}$ is compact, so is $B_1$.
Is $Q(u,u)$ (also $\langle u,Av\rangle$) considered a scalar product?
I assume that scalar product means the same thing as inner product in this context. The answer to your question is no. If $A$ is indefinite, then there necessarily exists a vector $v$ such that $Q(v,v) = v^TAv < 0$; in particular, plugging in an eigenvector of $A$ associated with a negative eigenvalue demonstrates this. Similarly, a positive semidefinite (but not definite) $A$ does not induce a scalar product because there exists a non-zero vector $v$ such that $v^TAv = 0$. We could apply an argument similar to yours to deduce that for any invertible matrix $M$ (in fact, any matrix with linearly independent columns will do), the function $Q(u,v) = \langle Mu, Mv \rangle$ defines a scalar product. Note that this can be rewritten as $Q(u,v) = u^T (M^TM)v$. In fact, the following are equivalent: $A$ is positive definite; $Q(u,v) = u^TAv$ is a scalar product; there exists an invertible matrix $M$ such that $A = M^TM$.
Dimension of union of fields over intersection of fields.
What you are calling the union of fields $L_1$ and $L_2$ is called the compositum of $L_1$ and $L_2$, or $L_1 L_2$. Let $F:=L_1\cap L_2$, and suppose that $L_1$ has basis $\{x_1,\ldots,x_{n_1}\}$ over $F$ and that $L_2$ has basis $\{y_1,\ldots,y_{n_2}\}$ over $F$. Then $L_1 L_2$ is spanned over $F$ by the $x_i y_j$'s, so it can have dimension at most $n_1 n_2$ over $F$. However, the dimension can be smaller. For example, let $\omega$ be a primitive cube root of 1, let $L_1:=\mathbb{Q}(\sqrt[3]{2})$, and let $L_2:=\mathbb{Q}(\omega \sqrt[3]{2})$. Then $L_1\cap L_2=\mathbb{Q}$ so $n_1=n_2=3$, but $L_1 L_2=\mathbb{Q}(\omega, \sqrt[3]{2})$ so $[L_1 L_2:L_1\cap L_2]=6<9=n_1 n_2$.
A question about factor ring.
I'll denote $\mathbb{Z_4} = \{\bar0,\bar1,\bar2,\bar3\}$, so $2\mathbb{Z_4} = \{ \bar0, \bar2 \}$. Take an element $a$ in $\mathbb{Z}_4/2 \mathbb{Z}_4$; this element has the form $\bar a + 2\mathbb{Z_4}$ for some $\bar a \in \mathbb{Z_4}$. Remember that two elements $a ,b \in \mathbb{Z}_4/2 \mathbb{Z}_4$ are "the same" if $\bar a - \bar b \in 2\mathbb{Z_4}$. Consider $\bar 3, \bar 1 \in \mathbb{Z_4}$: since $\bar 3 - \bar 1 = \bar 2 \in 2\mathbb{Z_4}$, the elements $3$ and $1$ are the same in the quotient ring. You can do similar calculations to see that the only classes in $\mathbb{Z}_4/2 \mathbb{Z}_4$ are the ones of $0$ and $1$.
Make 3 circles intersect in only one point by changing their radius as little as possible
A beginning: Let $(x_i,y_i)$ be the centers of the three given circles, $r_i$ their radii, $s_i$ the envisaged correction of $r_i$, and $(x,y)$ the prospective point of intersection of the corrected circles. Then you want to minimize $$f(s_1,s_2,s_3):=\sum_i s_i^2$$ under the three constraints $$(x-x_i)^2+(y-y_i)^2-(r_i+s_i)^2=0\qquad(1\leq i\leq3)\ .\tag{1}$$You therefore have to set up the Lagrangian $$\Phi(x,y,s_1,s_2,s_3):=f(s_1,s_2,s_3)-\sum_i\lambda_i\bigl((x-x_i)^2+(y-y_i)^2-(r_i+s_i)^2\bigr)$$ and solve the system consisting of $(1)$ and the five equations $${\partial\Phi\over\partial x}=0,\quad {\partial\Phi\over\partial y}=0,\qquad {\partial\Phi\over\partial s_i}=0 \quad(1\leq i\leq3)\ .$$ Good luck!
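If one wants numbers rather than the Lagrangian system, the same constrained minimization can be handed to SciPy directly; a sketch with made-up example circles (the centers, radii, and starting point are placeholders of mine):

```python
import numpy as np
from scipy.optimize import minimize

centers = np.array([[0.0, 0.0], [3.0, 0.0], [1.5, 2.5]])  # example data
radii = np.array([2.0, 2.0, 2.0])

def objective(v):                 # v = (x, y, s1, s2, s3); minimize sum of s_i^2
    return np.sum(v[2:] ** 2)

def constraints(v):               # the three equations (1): point lies on each corrected circle
    p = v[:2]
    return np.sum((p - centers) ** 2, axis=1) - (radii + v[2:]) ** 2

res = minimize(objective, x0=np.array([1.5, 1.0, 0.0, 0.0, 0.0]),
               constraints={"type": "eq", "fun": constraints})
print(res.x[:2], res.x[2:])       # common point and the radius corrections s_i
```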
Expected value, variance, evaluating limits and probability of drawing a ball from the urn
The claim that the white ball is drawn infinitely many times is equivalent to saying that there is no positive integer $N$ such that after $N$ draws, the white ball is never drawn again. In other words, pick a number $N$ as large (but finite) as you please. What is the probability that among draws $N+1, N+2, \ldots$, the white ball is never drawn again? It is the product of the individual probabilities that a black ball is drawn at every subsequent draw; i.e., $$\frac{N}{N+1} \cdot \frac{N+1}{N+2} \cdot \frac{N+2}{N+3} \cdot \ldots = \prod_{n=1}^\infty \frac{N+n-1}{N+n}.$$ And what is the limit of this infinite product? If by $X_n$ you mean the number of times the white ball is drawn in $n$ tries, then your recursion idea is appropriate (but ultimately not necessary). Let $$W_i \sim \operatorname{Bernoulli}(p = 1/i)$$ be equal to $1$ if on draw $i$, the drawn ball is white, and $0$ if black. Then $X_n = \sum_{i=1}^n W_i$, and $$\operatorname{E}[X_n] = \operatorname{E}\left[\sum_{i=1}^{n-1} W_i + W_n\right] = \operatorname{E}[X_{n-1}] + \operatorname{E}[W_n] = \operatorname{E}[X_{n-1}] + \frac{1}{n}.$$ Therefore, $$\operatorname{E}[X_n] = \sum_{i=1}^n \frac{1}{i} = H_n,$$ the $n^{\rm th}$ harmonic number. The variance is computed similarly, except we have to explicitly state that because the individual draws are independent, the variance of the sum equals the sum of the variances: $$\operatorname{Var}[X_n] \overset{\text{ind}}{=} \operatorname{Var}[X_{n-1}] + \operatorname{Var}[W_n].$$ I leave the third part as an exercise.
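A quick Monte Carlo check of $\operatorname{E}[X_n]=H_n$ in Python:

```python
import random

def simulate(n):
    """One run: count white-ball draws when draw i is white with probability 1/i."""
    return sum(random.random() < 1.0 / i for i in range(1, n + 1))

n, trials = 1000, 2000
estimate = sum(simulate(n) for _ in range(trials)) / trials
harmonic = sum(1.0 / i for i in range(1, n + 1))
print(estimate, harmonic)  # both approximately H_1000 ~ 7.49
```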
Choose 3 cards from a deck, if last two are spades what is the chance of first card being a spade?
Hint: This question could be asked in a very strange way that might actually be easier to understand: Knowing that the NEXT TWO cards MUST be spades, what are the odds that this first card is a spade? I should add that JMoravitz's comment is really excellent.
Immersion of a map and regular submanifold
In the case where both $a$ and $b$ are equal to zero, $f$ is a constant function, and the image is a point. A point is a $0$-manifold and its inclusion is an injective immersion which is a homeomorphism onto its image, so we can consider the case where $b\neq 0$; $f$ will be an immersion in this case. But if $a/b \notin \mathbb{Q}$, then the image will be dense. In such a case it can't be a submanifold, since any chart $\varphi:U\rightarrow \mathbb{R}^2$ will send $f(\mathbb{R})\cap U$ to a dense subset of $\mathbb{R}^2$, which can't be a proper subspace since its closure is all of $\mathbb{R}^2$. So one must have $a/b\in\mathbb{Q}$. If $a/b\in\mathbb{Q}$, then the image is homeomorphic to $S^1$. Identifying the torus with $\mathbb{R}^2/\mathbb{Z}^2$, you have that $f$ can be lifted to a function $\tilde{f}$ to $\mathbb{R}^2$ whose image is a line with slope $a/b$ passing through the origin. Since the slope is rational, $\tilde{f}$ will pass through a point with integer coordinates at $n\cdot b$, where $n\in \mathbb{Z}$. This means that $f$ has period $b$. By identifying $\mathbb{R}/b\mathbb{Z}\equiv S^1$, one can define a homeomorphism from $S^1$ to the image of $f$ through the quotient map. This map will be an embedding that has the same image as $f$. So the only condition is that either $a/b\in \mathbb{Q}$ or $b/a\in\mathbb{Q}$, or $a=0$ and $b=0$. (There's probably a neater way to write this.)
Range of Influence of the Wave Equation?
The initial values at the point on the unit circle closest to $(2,3)$ are the ones that first influence the function value at this point. By Theorem 6 in Chapter 2.4.3 of Evans' book, we know that if $u$ is a solution to the wave equation (with speed $c$ normed to $1$ for the moment) and $u(0)\equiv u_t(0)\equiv 0$ on the ball $B(x_0,t_0)$, then $u\equiv 0$ within the cone $$C=\{(x,t)\,|\,0\leq t\leq t_0,|x-x_0|\leq t_0-t\}.$$ This easily extends to the case of constant speed $c$ different from $1$ by simply scaling, so our condition is $u(0)\equiv u_t(0)\equiv 0$ within $B(x_0,ct_0)$. What's therefore left to do is to compute $\operatorname{dist}(x_0,B(0,1))$, with $x_0=(2,3)$: since you suppose that the support of $u(0),u_t(0)$ is within the unit disc $B(0,1)$, the searched-for $t_0$ is given by $t_0=\operatorname{dist}(x_0,B(0,1))/c$. But this distance is just the Euclidean norm of $x_0$ minus the radius of the unit circle, that is $$\operatorname{dist}(x_0,B(0,1))=\|x_0\|-1=\sqrt{2^2+3^2}-1=\sqrt{13}-1.$$ This finally leads to $$t_0=\frac{\sqrt{13}-1}{c}.$$ Remark: With your approach, you were on the way to answering a different question, namely: how long do I have to wait while measuring at $x_0$ until I can be sure that my initial data $u(0),u_t(0)$ vanish? You have however a sign error for the farthest point, which is in fact $$x_\text{far}=\left(-\frac{2}{\sqrt{13}},-\frac{3}{\sqrt{13}}\right).$$ I can't follow the second calculation at all, but the time (I call it $t_1$) should be $$t_1=\frac{\sqrt{13}+1}{c},$$ since the distance to the farthest point is exactly $2$ larger than the distance to the nearest point. I sketched the situation in Wolfram Alpha.
system of equations which share only one variable?
Two equations for three unknowns should ring a bell... And indeed one cannot deduce $(x,y,m)$ from the two equations in your post. The most one can do is to write two of them as explicit functions of the third one.
Solving a system of equations for x that contains $x^Tx$ (where x is a vector)
The first equation is equivalent to $2LQ^T x = c$, or $Q^T x = \frac{1}{2L}c$. Since $Q$ is symmetric, this simplifies to $Qx = \frac{1}{2L}c$. Assuming $Q$ is strictly positive definite ($x^T Q x > 0$ for all nonzero $x$), it follows that $Q$ is invertible. Therefore, $x = \frac{1}{2L}Q^{-1}c$ is the unique solution to the first equation. Plugging this solution into the second equation gives us $$\begin{aligned} x^T Q x &= \left(\frac{1}{2L}Q^{-1}c\right)^T Q \left(\frac{1}{2L}Q^{-1}c\right) \\ &= \frac{1}{4L^2}c^T Q^{-1} Q Q^{-1}c \\ &= \frac{1}{4L^2}c^T Q^{-1} c \\ \end{aligned}$$ where we have used the fact that the transpose of $Q^{-1}$ is still $Q^{-1}$, since $Q$ is symmetric. We want to find the value of $L$ which makes this equal to $1$: $$\frac{1}{4L^2}c^T Q^{-1} c = 1$$ Multiplying both sides by $L^2$ and taking the square root gives us $$L = \pm\frac{1}{2}\sqrt{c^T Q^{-1} c}$$ Note that the square root on the right-hand side is a real number, since $c^T Q^{-1} c > 0$ for $c \neq 0$ (because $Q^{-1}$ is positive definite whenever $Q$ is).
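A quick numerical sanity check in Python (the particular $Q$ and $c$ are arbitrary choices of mine):

```python
import numpy as np

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])                 # symmetric positive definite example
c = np.array([1.0, -2.0])

L = 0.5 * np.sqrt(c @ np.linalg.solve(Q, c))  # L = (1/2) sqrt(c^T Q^{-1} c)
x = np.linalg.solve(Q, c) / (2 * L)           # x = Q^{-1} c / (2L)
print(x @ Q @ x)                              # ~1.0, as the second equation requires
```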
Number of even sized partitions of $\{ 0,1,2,3,4,5,6,7\}$
Here are the numbers I came up with to compare any other result with:

Divide into 2 parts:
$(1,7)$: $\binom 87 = 8$
$(2,6)$: $\binom 86 = 28$
$(3,5)$: $\binom 85 = 56$
$(4,4)$: $\binom 84/2 = 35$

Divide into 4 parts:
$(1,1,1,5)$: $\binom 85 = 56$
$(1,1,2,4)$: $\binom 84 \binom 42 = 420$
$(1,1,3,3)$: $\binom 83\binom 53/2 = 280$
$(1,2,2,3)$: $\binom 83\binom 52\binom 32/2 = 840$
$(2,2,2,2)$: $\binom 82\binom 62\binom 42\binom 22/4! = 105$

Divide into 6 parts:
$(1,1,1,1,1,3)$: $\binom 83 = 56$
$(1,1,1,1,2,2)$: $\binom 82\binom 62/2 = 210$

Divide into 8 parts:
$(1,1,1,1,1,1,1,1)$: $\binom 80 = 1$

Total: $2095$. This could probably be written more neatly with multinomial coefficients, but the repeats still need to be divided out.
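As a cross-check, the per-size subtotals above are exactly the Stirling numbers of the second kind $S(8,2)=127$, $S(8,4)=1701$, $S(8,6)=266$, $S(8,8)=1$, so the total can be recomputed in a few lines of Python (a quick sketch using the standard recurrence):

```python
from functools import lru_cache

# S(n, k): number of partitions of an n-element set into k non-empty blocks.
@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # Element n either forms its own block, or joins one of k existing blocks.
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

print(sum(stirling2(8, k) for k in range(2, 9, 2)))  # 127+1701+266+1 = 2095
```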
Let $\sigma \in Aut(K)$ have infinite order and $F = \mathcal{F}(\sigma)$. Show that if $K/F$ is algebraic, then $K$ is normal over $F$.
Let $P\in F[X]$ be an irreducible monic polynomial with a root $x\in K$. Denote by $x,\sigma(x),\dots,\sigma^n(x)$ the orbit of $x$ under $\sigma$; each $\sigma^i(x)$ lies in $K$ since $\sigma\in \operatorname{Aut}(K)$, and the orbit is finite because $P^{\sigma}=P$, so every $\sigma^i(x)$ is among the finitely many roots of $P$. Write $Q=(X-x)(X-\sigma(x))\cdots(X-\sigma^n(x))$. We have $Q^{\sigma}=Q$, which implies $Q\in F[X]$. Since the $\sigma^i(x)$ are distinct roots of $P$, we deduce that $Q$ divides $P$; and since $P$ is irreducible and both polynomials are monic, $P=Q$. In particular $P$ splits into linear factors over $K$, so $K/F$ is normal.
How many subsets of a set $S$ of size $37$ contain $x$, but not $y$, where $x,y$ are distinct?
There is an obvious bijection between the power set of $S\setminus\{x,y\}$ and the set of sets you want to count: adjoin $x$ to each subset of $S\setminus\{x,y\}$. Hence there are $2^{35}$ such subsets.
Compute the multiple integral of the function $\frac{xy}{2}$ over the domain $D$ that is the area formed by the following curves: $L_1: x=0, L_2: x^2+y^2=4, L_3:y=-x$
The domain is a portion of the circle centered at the origin with radius $2$, so the integral should be set up in polar coordinates. With $x=r\cos\theta$ and $y=r\sin\theta$, the integrand $\frac{xy}{2}$ becomes $\frac{r^2\cos\theta\sin\theta}{2}$, and the area element contributes a factor of $r$, giving $$\int_{\theta_1}^{\theta_2}\int_0^2\frac{\cos \theta \sin \theta}{2}\, r^3\,dr\,d\theta,$$ where the limits for $\theta$ depend upon which part of the circle we are considering for the domain.
Limit $\lim_{t\to\infty}\frac{10t^3+3t-18}{6-kt^3} = 2$
$$ \lim_{t\to\infty} \frac {10t^3+3t-18}{6-kt^3} = \lim_{t\to\infty} \frac {10 \left( 1+\frac 3{10t^2}-\frac 9{5t^3}\right )}{-k \left( 1-\frac 6{kt^3}\right )} = -\frac {10}k $$ Now, just find $k$ from $$ -\frac {10}k = 2 \implies k = -5 $$
On the confinement to $[0;1]$ of the solution of $dX_{t}=(1-X_{t})X_{t}dB_{t}$
Assume that $X_0$ is in $(0,1)$, let $T=\inf\{t\gt0\mid X_t=0\ \text{or}\ X_t=1\}$, thus $X_t$ is in $(0,1)$ for every $t\lt T$. Consider, for every $t\lt T$, $$ Y_t=\log\left(\frac{X_t}{1-X_t}\right). $$ Then $T$ is also $T=\inf\{t\gt0\mid Y_t=\pm\infty\}$ and Itô's formula shows that, for every $t\lt T$, $$ Y_t=Y_0+W_t+\frac12\int_0^tZ_s\mathrm ds,\qquad Z_t=\frac{\mathrm e^{Y_t}-1}{\mathrm e^{Y_t}+1}. $$ One sees that $|Z_t|\lt1$ for every $t\lt T$, in particular, $$ |Y_t-Y_0-W_t|\leqslant\frac12t, $$ for every $t\lt T$. This proves that $|Y_t|$ is finite for every $t$ and that $T=+\infty$ almost surely, that is, $X_t$ is in $(0,1)$ almost surely for every $t$. The same technique applies to the strong solution of $$\mathrm dX_t=C(X_t)\mathrm dW_t,$$ starting from $X_0$ in $(a,b)$, for every function $C:[a,b]\to\mathbb R$ positive on $(a,b)$, zero at $a$ and at $b$, and with bounded derivative on $[a,b]$. Then $(X_t)$ stays in $(a,b)$ forever, almost surely. To show this, one considers the function $D$ defined on $(a,b)$, for some $c$ in $(a,b)$, by $$ D(x)=\int_c^x\frac{\mathrm dz}{C(z)}. $$ Then $$ D(X_t)=D(X_0)+W_t-\frac12\int_0^tC'(X_s)\mathrm ds, $$ thus, $$ |D(X_t)-D(X_0)-W_t|\leqslant\frac12t\,\|C'\|_\infty, $$ for every $t$, and, since $\lim\limits_{x\to b^-}D(x)=+\infty$ and $\lim\limits_{x\to a^+}D(x)=-\infty$, the same reasoning applies.
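As an illustration (not a proof) of the confinement, here is a rough Euler–Maruyama simulation of $\mathrm dX_t=X_t(1-X_t)\mathrm dB_t$ in Python; the step size, horizon and number of paths are arbitrary, and the discrete scheme could in principle overshoot the boundary, though the vanishing diffusion coefficient near $0$ and $1$ makes that extremely unlikely here.

```python
import numpy as np

rng = np.random.default_rng(42)
dt, n_steps, n_paths = 1e-3, 100_000, 20
X = np.full(n_paths, 0.5)                      # start every path at X_0 = 1/2
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X = X + X * (1 - X) * dB                   # Euler-Maruyama step

print(X.min(), X.max())   # empirically stays strictly inside (0, 1)
```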
Question about continuity of a function of two variables
Yes. A function is continuous at a point $\mathbf{x}_0$ iff $\lim_{\mathbf{x} \rightarrow \mathbf{x}_0} f(\mathbf{x}) = f(\mathbf{x}_0)$.
Definite Integral for $x^{-n}$
If $n \neq 1$ then $$\int x^{-n}dx = \frac{x^{-n+1}}{-n+1} + C.$$ If $n=1$, then $$\int x^{-1}dx = \ln|x| + C.$$ For example, suppose you want to find the area under the curve $x^{-3}$ from $x=1$ to $x=5$. Then you have $$\int_1^5 x^{-3}dx = \frac{x^{-3+1}}{-3+1}\bigg \vert_1^5 = \frac{x^{-2}}{-2}\bigg\vert_1^5 = \frac{-5^{-2}}{2} - \frac{-1}{2} = \frac{1}{2}-\frac{1}{50}. $$ Your problem was that you were not realizing that if $x > y > 0$ then $1/x < 1/y$. This was what was giving you the illusion of negative area. See how the terms get inverted while calculating the area.
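A quick numerical cross-check of the worked example (a throwaway sketch using SciPy):

```python
from scipy.integrate import quad

# Integral of x^(-3) from 1 to 5, compared with the closed form 1/2 - 1/50.
value, _ = quad(lambda x: x**-3, 1, 5)
print(value, 1/2 - 1/50)   # both approximately 0.48
```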
Montel's theorem for real sequence of analytical functions
No. For example, take $f_n(t)=\sin(nt)$: the sequence is uniformly bounded by $1$, yet no subsequence converges pointwise almost everywhere, let alone locally uniformly.
Absolute Value and Exponents
It is not correct that $|x^a| = -x^a$ when $x< 0$; only when $x^a < 0$.
Any product of two transpositions is in a normal subgroup $H$ of $S_n$.
This proof is by @ViníciusNovelli and is given in the comments. Since $A_n\lhd S_n$ and $\operatorname{sgn}:S_n\to \{1,-1\}$ has $\ker(\operatorname{sgn})=A_n$, the sign of a product of two transpositions is $(-1)(-1)=1$, so such a product must lie in $A_n$. So let $H=A_n$.
How many numbers $k$ of $200 \choose k$ are divisible by $3$? $k \in \{0,1,2,\cdots 200\}$
According to Lucas' theorem, a binomial coefficient $\binom{m}{n}$ is divisible by a prime $p$ if and only if at least one of the base-$p$ digits of $n$ is greater than the corresponding digit of $m$. Here $200=(21102)_3$ in base $3$, so the number of $k\in\{0,1,\dots,200\}$ for which $\binom{200}{k}$ is not divisible by $3$ is $(2+1)(1+1)(1+1)(0+1)(2+1)=36$, and hence $201-36=165$ values of $k$ make $\binom{200}{k}$ divisible by $3$.
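This count is easy to confirm by brute force in Python (math.comb needs Python 3.8+):

```python
from math import comb

# Direct count of k in {0, ..., 200} with C(200, k) divisible by 3.
print(sum(1 for k in range(201) if comb(200, k) % 3 == 0))   # 165

# Lucas-style count: 200 = (21102) in base 3, so the k whose base-3
# digits never exceed those of 200 number (2+1)(1+1)(1+1)(0+1)(2+1) = 36.
print(201 - 3 * 2 * 2 * 1 * 3)                                # 165 again
```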
Confusion about statistics
The statement of the test is a bit confusing because the authors change the sign on the critical value. You're doing a left-tailed test and the critical value is given as $-z_q$. Then the authors switch to giving the formula for $q$ in terms of $z_q$ instead of $-z_q$. Then they give the formula for the p-value in terms of $\Phi$, which conventionally refers to the standard normal cumulative distribution function, i.e. $\Phi(x)=P(N(0,1) < x)$. To answer your question, you don't want lower.tail=FALSE in the R code, since you want to do a left-tailed test.
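For what it's worth, the same left-tailed p-value can be computed in Python, where scipy.stats.norm.cdf plays the role of $\Phi$ (the observed statistic below is made up purely for illustration):

```python
from scipy.stats import norm

z_obs = -1.8                # hypothetical observed test statistic
p_value = norm.cdf(z_obs)   # lower tail by default, like R's pnorm
print(p_value)              # about 0.036
```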
need a 3-group with 3 generators
I think the issue is your formulation and test of "with three generators". Presumably you mean that the group can be generated by $3$ elements (but not by $2$); of course you could generate such a group with more elements. The generating set stored by GAP and returned by GeneratorsOfGroup is, for solvable groups, a polycyclic generating set, which for a group of order $p^n$ always has $n$ generators. Thus your test eliminates all groups. If you replace Length(GeneratorsOfGroup(G)) by Length(MinimalGeneratingSet(G)) you will find the groups you're interested in (starting at order $3^5$, as noted by Alexander Konovalov above).
Chain rule for two variable functions PROOF
Theorem. Suppose that $x=g(t)$ and $y=h(t)$ are differentiable functions of $t$ and $z=f(x,y)$ is a differentiable function of $x$ and $y$. Then $z=f(x(t),y(t))$ is a differentiable function of $t$ and $$\frac{\mathrm{d} z}{\mathrm{d} t} = \frac{\partial z}{\partial x}\cdot\frac{\mathrm{d} x}{\mathrm{d} t}+\frac{\partial z}{\partial y}\cdot\frac{\mathrm{d} y}{\mathrm{d} t},$$ where the ordinary derivatives are evaluated at $t$ and the partial derivatives are evaluated at $(x,y)$.

Proof. Suppose that $f$ is differentiable at the point $P(x_0,y_0)$, where $x_0=g(t_0)$ and $y_0=h(t_0)$ for a fixed value of $t_0$. We wish to prove that $z=f(x(t),y(t))$ is differentiable at $t=t_0$ and that our equality holds at that point as well. Since $f$ is differentiable at $P$, we know that $$f(x,y)=f(x_0,y_0)+f_x(x_0,y_0)(x-x_0)+f_y(x_0,y_0)(y-y_0)+E(x,y),$$ where $$\lim_{(x,y)\to(x_0,y_0)}\frac{E(x,y)}{\sqrt{(x-x_0)^2+(y-y_0)^2}}=0.$$ We then subtract $z_0=f(x_0,y_0)$ from both sides of this equation: $$z(t)-z(t_0)=f_x(x_0,y_0)(x(t)-x(t_0))+f_y(x_0,y_0)(y(t)-y(t_0))+E(x(t),y(t)).$$ Next, we divide both sides by $t-t_0$ and take the limit as $t$ approaches $t_0$: $$\lim_{t\to t_0}\frac{z(t)-z(t_0)}{t-t_0}=f_x(x_0,y_0)\lim_{t\to t_0}\frac{x(t)-x(t_0)}{t-t_0}+f_y(x_0,y_0)\lim_{t\to t_0}\frac{y(t)-y(t_0)}{t-t_0}+\lim_{t\to t_0}\frac{E(x(t),y(t))}{t-t_0}.$$ It remains to show that the last limit is $0$. For $t\neq t_0$, $$\left|\frac{E(x(t),y(t))}{t-t_0}\right|=\frac{|E(x(t),y(t))|}{\sqrt{(x(t)-x_0)^2+(y(t)-y_0)^2}}\cdot\frac{\sqrt{(x(t)-x_0)^2+(y(t)-y_0)^2}}{|t-t_0|}.$$ As $t\to t_0$ we have $(x(t),y(t))\to(x_0,y_0)$, because differentiable functions are continuous; hence the first factor tends to $0$ by the defining property of $E$, while the second factor tends to $\sqrt{g'(t_0)^2+h'(t_0)^2}$, which is finite. Therefore the last limit is $0$, and $$\frac{\mathrm{d} z}{\mathrm{d} t}\bigg|_{t=t_0}=f_x(x_0,y_0)\,g'(t_0)+f_y(x_0,y_0)\,h'(t_0),$$ which is the claimed formula. $\blacksquare$
Prove that the relation $x^n + y^n = z^n$ does not hold for $n \geq z$
Since we assume that $x,y,z$ are positive integers, we have $x^n + y^n > y^n$, and since $a \mapsto a^n$ is monotonic, it follows that $x^n + y^n > z^n$ if $y \geqslant z$. By the same reasoning we can exclude $x \geqslant z$. For $x,y < z$, we then have $x^n + y^n \leqslant 2(z-1)^n$, and then showing $2(z-1)^n < z^n$ finishes the proof. Since by assumption $n \geqslant z$, we have $$\frac{(z-1)^n}{z^n} = \biggl(1 - \frac{1}{z}\biggr)^n \leqslant \biggl( 1 - \frac{1}{z}\biggr)^z,$$ and it is easily shown if not already known that $$\biggl(1 - \frac{1}{z}\biggr)^z < e^{-1}$$ for all positive integers $z$. Hence $$\frac{2(z-1)^n}{z^n} < \frac{2}{e} < 1.$$
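A brute-force sanity check of the statement over small ranges (the bounds are arbitrary; by the first part of the argument it suffices to scan $x,y\le z$):

```python
# Look for counterexamples to: no x^n + y^n = z^n with n >= z.
found = [(x, y, z, n)
         for z in range(1, 15)
         for n in range(z, z + 5)
         for x in range(1, z + 1)
         for y in range(1, z + 1)
         if x**n + y**n == z**n]
print(found)  # [] -- none, as the proof guarantees
```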
Why is the set $S = \{ (x,y,z) \in \mathbb{N}^3 : x^2 + 4yz = p, p \text{ prime} \}$, finite
Note that $x \in \{0,1,2,\ldots,\sqrt{p}\}$, $y \in \left\{1,2,\ldots,\dfrac{p-x^2}4 \right\}$ and $z \in \left\{1,2,\ldots,\dfrac{p-x^2}{4y}\right\}$. Hence, clearly we have $$\# S \leq \sqrt{p} \times \left(\dfrac{p}4 \right)^2 = \dfrac{p^{5/2}}{16}$$
Prove that event will happen an infinite number of times
This is a simple application of the (second) Borel–Cantelli lemma: if the events $\{X_n \leq a_n\}$ are independent and $\sum P(X_n \leq a_n)=\infty$, then $X_n \leq a_n$ holds infinitely often with probability $1$. So you only have to check that $\sum \left[1-e^{-(\ln n+ \ln \ln n +\ln^{2} \ln \ln n)}\right]=\infty$. Can you check this? [The general term of this series does not tend to $0$; in fact it tends to $1$.]
Diffusion process with "centering" drift
So first off you must work with the Poisson process with jump rate $p$ instead of directly with $C_t$. This is a reasonable assumption for large time, by basically the same logic as the Poisson approximation to the binomial. I'll call that Poisson process $C_t$ from here out. The main idea is that the Poisson process $C_t$ itself cannot be described by a diffusion process directly at all. One must instead work with a process with at least small drift if not zero drift, because just $C_t$ when converted to a continuum limit in a naive way would either be deterministic (because $O(t/dt)$ steps of magnitude $dt$ occur and the law of large numbers kicks in) or diverge instantly (because $O(t/dt)$ steps of magnitude $(dt)^{1/2}$ occur and they tend not to cancel out because of the drift). In this context it makes sense to achieve the approximation by defining a time scale $T \gg 1$ and examining $c_\tau=C_{T \tau}=T \mu_\tau + T^{1/2} \sigma_\tau$ where $\mu_\tau$ is deterministic and $\sigma_\tau$ is random. Neither can depend explicitly on $T$ (if they do then our scaling was wrong). This is a kind of analogue of van Kampen system size expansion where the large parameter is forced in by stretching out the time scale. (Note that this focus on large time is necessary, there is no way that the short time dynamics can be resolved by a diffusion process in any reasonable sense.) In this case $\mu_\tau=p\tau$ necessarily, and then you are left to perform Taylor expansion on the master equation for the evolution of $\sigma_\tau=T^{-1/2} \left ( c_\tau-T\mu_\tau \right )=T^{-1/2} c_\tau - T^{1/2} p \tau$ in powers of the small parameter $T^{-1/2}$ in order to isolate the "microscopic drift" (if there is any) and the diffusion inside of $c_\tau$. It looks like you get that the first moment is just zero while the second moment is $T^{-1}$ times the second moment of the increment distribution of $c_\tau$ which is $pT$. So for $T \gg 1$ and $t=T\tau$ where $\tau$ is of order $1$, we have that the PDF of $\sigma_\tau$ asymptotically exists and evolves as $$\frac{\partial f}{\partial \tau}=\frac{p}{2} \frac{\partial^2 f}{\partial x^2}.$$ Consequently the overall process $C_t$ behaves like $pt+\sqrt{p} B_t$ where $B_t$ is a Brownian motion, for large time. Remarks: This implies that the one-dimensional distribution of $C_t$ is asymptotically $N(pt,pt)$ distributed. This is consistent with what we know for CLT for the Poisson distribution. It's not exactly consistent with what we know for CLT for the binomial (which would tell us the variance should be $pqt$ not $pt$), but that error was already committed in passing to the Poisson process approximation in the first place. You could do the procedure over by writing $C_t=t-(t-C_t)$ and then starting from scratch with that. This will be more accurate than what I wrote here if $p>1/2$. (This is the analogue of the Poisson approximation to the binomial in the $p \to 1^-$ limit.) Keep in mind that it does not make sense to say that the one dimensional distribution of $C_t$ converges to $N(pt,pt)$ as $t \to \infty$; this is the same situation as with CLT, if you want convergence you need to shift and rescale. My main source here is Gardiner Handbook of Stochastic Methods Chapter 7.
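As a quick empirical check of the asymptotic $N(pt,pt)$ claim (parameters here are made up), one can sample a rate-$p$ Poisson process at a large time and compare mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
p, t = 0.3, 1000.0
C_t = rng.poisson(lam=p * t, size=100_000)   # Poisson process at time t
print(C_t.mean(), C_t.var())                 # both approximately p*t = 300
```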
Inequality with bounded functions
I think it depends on the values of $A$ and $B$. The numerator evaluates to $8(A-B)(A^2-1)+2AB(B-1)$. Using the fact that both $A$ and $B$ are $\geq 1$, the numerator can only be negative when $B>A$. My rough algebraic manipulation seems to suggest that the numerator is positive when $B>4A+1$. When $A<B<4A+1$, however, it is possible for the numerator to be negative. I tried $B=2A$ and found that the expression will be negative in that case if $A>2$.
Congruences with powers
Yes, $\bmod 29\!:\ \color{#0a0}{\dfrac{9}5}\equiv \dfrac{54}{30}\equiv \dfrac{(\color{#c00}{-1})4}1\equiv \color{#c00}{2^{\large 14}}2^{\large 2}\equiv \color{#0a0}{2^{\large 16}},\,$ so with $\,x = 2^{\large n}\,$ we have $ x^{\large 44}\!\equiv \color{#0a0}{\dfrac{9}5}\!\!\iff\!\!2^{\large 44n}\!\equiv \color{#0a0}{2^{\large 16}}\!\!\!\iff\! \bmod 28\!:\, \underbrace{44n}_{\textstyle 16n}\equiv 16\!\!\!\underset{\large \div\ 4\!\!}\iff\! \bmod 7\!:\ 4n\equiv 4\!\!\iff\! n\equiv 1$
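A quick verification in Python (the three-argument pow with exponent $-1$ computes a modular inverse and needs Python 3.8+):

```python
# Solve x^44 = 9/5 (mod 29) with x = 2^n, checking n over a period of 28.
target = 9 * pow(5, -1, 29) % 29                   # 9/5 mod 29 = 25
sols = [n for n in range(28)
        if pow(pow(2, n, 29), 44, 29) == target]
print(sols)                                        # [1, 8, 15, 22]
print(all(n % 7 == 1 for n in sols))               # True: n = 1 (mod 7)
```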
Does the norm of the matrix inverse alone say anything about the condition number
The condition number is scaling invariant. That is, for each non-singular $A$ and $\alpha > 0$ we have $$ \operatorname{cond} (\alpha A) = \alpha\alpha^{-1} \| A\| \|A^{-1}\| = \operatorname{cond}(A). $$ So, by multiplication with a scalar you can make the norm of the inverse arbitrarily large (or small) without changing the condition number. However, if you fix $\|A\|$, say for example $=1$, then $\operatorname{cond}(A)$ is trivially proportional to $\|A^{-1}\|$.
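A small NumPy illustration of both points (random matrix and scale factor chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
alpha = 1000.0

# Scaling leaves the condition number unchanged...
print(np.linalg.cond(A), np.linalg.cond(alpha * A))      # equal

# ...while the norm of the inverse can be made arbitrarily small.
print(np.linalg.norm(np.linalg.inv(alpha * A), ord=2))
```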
Is the sequence bounded?
Let us reformulate the question: is the series with positive terms $$\sum_{k=2}^{\infty}a_k, \qquad a_k:=\log_{k!}\left(\frac{k+1}{2}\right),$$ a convergent series? (The $x_n$ appear as the partial sums of this series.)

First proof: Using the Stirling approximation (https://en.wikipedia.org/wiki/Stirling%27s_approximation), $$a_k=\dfrac{\ln(\tfrac{k+1}{2})}{\ln(k!)}\approx\dfrac{\ln(k+1)-\ln(2)}{k \ln(k) - k}\approx\dfrac{1}{k}.$$ Being equivalent to the general term of the harmonic series, this series is divergent. Said otherwise, as its terms $a_k$ are positive, its partial sums tend to $+\infty$, thus are not bounded.

Second proof: Lemma 1: $\forall k \geq 2, k \in \mathbb{N}, \ \ \dfrac{k+1}{2} \geq \sqrt{k}$ (easy; it is AM–GM applied to $k$ and $1$). Consequence: taking the natural logarithm on both sides, $\tfrac{\ln(k+1)-\ln 2}{\ln(k)} \geq \tfrac12.$ Lemma 2: $\forall k\geq 2, k \in \mathbb{N}, \ \ \ln(k!) \leq k \ln k$ (even easier). Then, using Lemma 2 and the consequence of Lemma 1: $$a_k=\dfrac{\ln(\tfrac{k+1}{2})}{\ln(k!)} \geq \left(\tfrac{\ln(k+1)-\ln 2}{\ln(k)}\right)\frac{1}{k} \geq \frac{1}{2}\cdot\frac{1}{k}.$$ The general term being bounded below by a constant times the general term of the harmonic series, the given series is divergent.
The differential equation $\frac{dy}{dx} +y^2 + \frac{x}{1-x}y = \frac{1}{1-x}$
Make the change of variables $y=1+\frac{1}{u(x)}$ (note that $y\equiv 1$ is a particular solution, which is what suggests this substitution); the equation then becomes linear: $$u'=\frac{x-2}{x-1}u+1.$$ I think you can take it from here.
How important is the own talent for research of your PhD supervisor?
I have a PhD and have served as direct advisor for a few PhD students and on committees for several, though in physics, electrical engineering and computer science, not mathematics. A few thoughts:

First, look deep into yourself, review your career so far, understand your strengths and weaknesses, and choose an advisor accordingly. Some of my friends and colleagues in math are superb math problem SOLVERS, but not question POSERS. If you're such a mathematician, choose an advisor who can help you identify great problems, and also teach you how to do so yourself (essential for a productive career).

Next, choose someone with whom you can work productively. He or she need not be a close friend, or agree with you on politics or religion or sports teams or whatever, but when you spend an hour in his or her office, you should feel like you're making progress, that the advisor hears your challenges and helps you along. Most good students doing world-class work hit stumbling blocks, and you need to know that your advisor can help you. (He or she shouldn't be travelling the world, or be such a recluse, or have so many students, that it will take weeks to see him or her.)

Understand within yourself whether you know precisely the field and class of problems you'd like to address (topology, number theory, differential equations, ...) and choose accordingly. But if you're still unsure, try to find an advisor who will support your exploration throughout a range of fields.

All other things being equal, a more famous, better connected advisor may help you land a job, but notice who is a rising star and will be making a name for him or herself in the three or four years it will take you to finish. Being the first advisee of such a rising star can be a great help when you graduate, especially if potential employers want to move into the field pioneered by your advisor. (My own boss was the first PhD student of just such a rising star at a major university, and their joint work has been cited many thousands of times.)

Importantly: talk honestly to current PhD students and graduates of a candidate advisor, see where they've found jobs, and ask whether they would work again with that candidate advisor. Most importantly: talk to candidate advisors and ask about their advising methods. You can get each person's views on previous students and compare these views to those from the students and former students with whom you've corresponded. Good luck!
Measure theory; proving an infinite partition exists from a sigma-algebra.
Your idea is correct. If the space is finite then the sigma-algebra is finite, and your partition problem is easy enough. If the sigma-algebra is infinite then it is uncountable. Let $\Bbb{A}=\{A_i:i \in I\}$. Then you can take a countable sequence $\{A_n:n\in \Bbb{N}\} \subset \Bbb{A}$. Now define $B_1=A_1$ and $B_n=A_n \setminus \bigcup_{k=1}^{n-1}A_k$ for $n\ge2$. Note that these sets are pairwise disjoint and belong to $\Bbb{A}$ (since $\Bbb{A}$ is closed under set differences and unions); choose the $A_n$ so that the resulting $B_n$ are distinct, which is possible because $\Bbb{A}$ is infinite. Finally take $\Bbb{A}_n:=\{B_n\}$ and $\Bbb{A}_0=\Bbb{A} \setminus \{B_n: n \in \Bbb{N}\}$. So you have a partition $\{ \Bbb{A}_n:n=0,1,2,\dots\}$ of the sigma-algebra.
What is the residue of $z^2 \cos(\frac{1}{z})$
The residue is the coefficient of the term $\frac{1}{z}$ in the Laurent Series. The Maclaurin Series of $\cos z$ is $$\cos z = 1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \frac{z^6}{6!} + \cdots $$ It follows that $$\cos\left(\frac{1}{z}\right) = 1 - \frac{1}{2!}\frac{1}{z^2} + \frac{1}{4!}\frac{1}{z^4} - \frac{1}{6!}\frac{1}{z^6}+\cdots $$ Finally, multiplying both sides by $z^2$, we get $$z^2\cos\left(\frac{1}{z}\right) = z^2 - \frac{1}{2!} + \frac{1}{4!}\frac{1}{z^2}- \frac{1}{6!}\frac{1}{z^4}+\cdots $$ The coefficient of $\frac{1}{z}$ is $0$, so the residue of $z^2\cos\left(\frac{1}{z}\right)$ is $0$.
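One can double-check this with SymPy by expanding the Maclaurin series of $\cos w$ and substituting $w=1/z$ (the truncation order is chosen arbitrarily):

```python
import sympy as sp

z, w = sp.symbols('z w')
cos_series = sp.cos(w).series(w, 0, 8).removeO()   # 1 - w^2/2! + w^4/4! - w^6/6!
laurent = sp.expand(z**2 * cos_series.subs(w, 1/z))
print(laurent)               # z**2 - 1/2 + 1/(24*z**2) - 1/(720*z**4)
print(laurent.coeff(z, -1))  # 0, confirming the residue is 0
```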
Prove the intersection of a set complement equals the complement of the union of a set.
In your proof, you don't use De Morgan's law; in fact, the statement you want to prove is called De Morgan's law. However, to prove that the two sets are equal you have to show that \begin{equation} \left(\bigcap_{i \in I} A_i \right)^{c} \subset \bigcup_{i \in I} \big( A_i \big)^c \qquad \text{ and } \qquad\bigcup_{i \in I} \big( A_i \big)^c \subset \left( \bigcap_{i \in I} A_i \right)^c. \end{equation} Let me show you the first inclusion. An element $x$ of the set $\big( \bigcap_{i \in I} A_i \big)^c$ is not an element of the set $\bigcap_{i \in I} A_i$. Consequently, there is at least one set $A_i$ such that $x \notin A_i$. That is, $x \in (A_i)^c$. Hence, $x$ is an element of $\bigcup_{i \in I} (A_i)^c$. The proof of the second inclusion is similar.
Generating correlated random numbers: Why does Cholesky decomposition work?
The covariance matrix of any random vector $Y$ with zero mean is given as $\mathbb{E} \left(YY^T \right)$, where $Y$ is a random column vector of size $n \times 1$. Now take a random vector $X$ consisting of uncorrelated random variables, each random variable $X_i$ having zero mean and unit variance. Since the $X_i$'s are uncorrelated random variables with zero mean and unit variance, we have $\mathbb{E} \left(X_iX_j\right) = \delta_{ij}$. Hence, $$\mathbb{E} \left( X X^T \right) = I.$$ To generate a random vector with a given covariance matrix $Q$, look at the Cholesky decomposition of $Q$, i.e. $Q = LL^T$. This is possible because a covariance matrix $Q$ is by definition symmetric and positive semidefinite; assuming it is in fact positive definite (i.e. non-degenerate), the Cholesky decomposition exists. Now look at the random vector $Z = LX$. We have $$\mathbb{E} \left(ZZ^T\right) = \mathbb{E} \left((LX)(LX)^T \right) = \underbrace{\mathbb{E} \left(LX X^T L^T\right) = L \mathbb{E} \left(XX^T \right) L^T}_{\text{ since expectation is a linear operator}} = LIL^T = LL^T = Q.$$ Hence, the random vector $Z$ has the desired covariance matrix $Q$.
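Here is a minimal sketch of the recipe in Python/NumPy, with a made-up target covariance $Q$:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[2.0, 0.8],
              [0.8, 1.0]])              # desired covariance (symmetric, pos. def.)
L = np.linalg.cholesky(Q)               # Q = L L^T

X = rng.standard_normal((2, 100_000))   # uncorrelated, zero mean, unit variance
Z = L @ X                               # Z = L X has covariance Q

print(np.cov(Z))                        # approximately Q
```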