Does every element of tensor product look like this?
I think I got it: the above fact implies that there exist linearly independent sets $\{v_1,\dots,v_m\}\subset V$ and $\{w_1,\dots,w_n\}\subset W$ and scalars $\alpha_{ij}$, $i=1,\dots,m$, $j=1,\dots,n$, such that $$a=\sum_{i=1}^m\sum_{j=1}^n \alpha_{ij}\,v_i\otimes w_j=\sum_{i=1}^m v_i\otimes \sum_{j=1}^n \alpha_{ij}w_j.$$ If the vectors $w_i'=\sum_{j=1}^n \alpha_{ij}w_j$ are linearly independent, we have what we need. If not, let $(w_k'')_{k=1}^p$ be a basis for $\operatorname{span}\{w_1',\dots,w_m'\}$. Then $p<m$ and we have, for some $\beta_{ik}$, $$a=\sum_{i=1}^m v_i\otimes \sum_{k=1}^p \beta_{ik} w_k''=\sum_{k=1}^p\Big(\Big(\sum_{i=1}^m\beta_{ik} v_i\Big)\otimes w_k''\Big).$$ Now, if the vectors $v_k'=\sum_{i=1}^m\beta_{ik} v_i$ are linearly independent, we are done. If not, we take a basis for $\operatorname{span}\{v_1',\dots,v_p'\}$ (whose dimension is $<p$) and continue as before. Since at every step of this algorithm the dimension of the relevant span strictly decreases, we must reach linearly independent sets on both sides in a finite number of steps. The end.
How do I prove that a constant $C$ exists that matches these bounds?
The assertion is wrong. Try $T(2n)=n^3$ and $T(2n+1)=3n^4$.
Question about proof of Leibniz's integral rule.
Yes, you can apply the Fundamental Theorem of Calculus to compute$$\frac\partial{\partial x_1}\int_{x_1}^{x_2}f(x,x_3)\,\mathrm dx,\tag1$$since here you can treat $x_2$ and $x_3$ as if they were constants; you will get that $(1)$ is equal to $-f(x_1,x_3)$. And you will also get that$$\frac\partial{\partial x_2}\int_{x_1}^{x_2}f(x,x_3)\,\mathrm dx=f(x_2,x_3).$$
Uniformly bounded solution to a dynamical system.
Since: $$\frac{1}{2}\frac{d}{dt}(x^2+y^2)=y\dot{y}+x\dot{x} = 15x^2+x^3-3x^4+4y^3-5y^4$$ we know that $$\frac{(x^2+y^2)'}{(x^2+y^2)^2}$$ is negative and bounded for any $(x,y)$ such that $x^2+y^2$ is big enough - many thanks to Ivan. But any function that satisfies: $$ -J\cdot f^2(t)\leq f'(t) \leq -K\cdot f^2(t),\quad J,K>0 \tag{2}$$ is bounded for $t\to +\infty$, since the solution of $(2)$ with an equality sign is a function that decays like $\frac{1}{t}$ for $t>0$.
non-trivial result about integrals due to Gromov
Thanks to Mariano Suárez-Alvarez for pointing out a bad assumption I made in my previous attempt. For all $u\le v$ in $[a,b]$ we have $$ \frac{f(u)}{g(u)}\ge\frac{f(v)}{g(v)} $$ Assuming that $g$ is either non-negative or non-positive on all of $[a,b]$, we get $$ f(u)g(v)\ge f(v)g(u) $$ Let $r\le s$. Then, integrating in $u$ from $a$ to $r$ and then in $v$ from $r$ to $s$, we get $$ \int_a^rf(u)\mathrm{d}u\;\int_r^sg(v)\mathrm{d}v\ge\int_a^rg(u)\mathrm{d}u\;\int_r^sf(v)\mathrm{d}v $$ Then we have $$ \begin{align} &\frac{\int_a^rf(t)\mathrm{d}t}{\int_a^rg(t)\mathrm{d}t}-\frac{\int_a^sf(t)\mathrm{d}t}{\int_a^sg(t)\mathrm{d}t}\\ &=\frac{\int_a^rf(t)\mathrm{d}t\;\int_a^sg(t)\mathrm{d}t-\int_a^rg(t)\mathrm{d}t\;\int_a^sf(t)\mathrm{d}t}{\int_a^rg(t)\mathrm{d}t\;\int_a^sg(t)\mathrm{d}t}\\ &=\frac{\int_a^rf(t)\mathrm{d}t\;(\int_a^rg(t)\mathrm{d}t+\int_r^sg(t)\mathrm{d}t)-\int_a^rg(t)\mathrm{d}t\;(\int_a^rf(t)\mathrm{d}t+\int_r^sf(t)\mathrm{d}t)}{\int_a^rg(t)\mathrm{d}t\;\int_a^sg(t)\mathrm{d}t}\\ &=\frac{\int_a^rf(t)\mathrm{d}t\;\int_r^sg(t)\mathrm{d}t-\int_a^rg(t)\mathrm{d}t\;\int_r^sf(t)\mathrm{d}t}{\int_a^rg(t)\mathrm{d}t\;\int_a^sg(t)\mathrm{d}t}\\ &\ge0 \end{align} $$ Update: The requirement that $g$ stay either non-negative or non-positive is reasonable, since the result is false for $f(t)=1-t$ and $g(t)=1-t^2$ on $[0,\frac{3}{2}]$, as a graph of $\frac{\int_0^x(1-t)\;\mathrm{d}t}{\int_0^x(1-t^2)\;\mathrm{d}t}$ shows (graph omitted).
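A quick numerical check of the counterexample, using the closed forms $\int_0^x(1-t)\,dt=x-x^2/2$ and $\int_0^x(1-t^2)\,dt=x-x^3/3$ (the sample points $x=1$ and $x=1.4$ are my choice):

```python
# Verifying the counterexample numerically: f(t) = 1 - t, g(t) = 1 - t^2
# on [0, 3/2]. Here f/g = 1/(1+t) is decreasing, but g changes sign at t = 1,
# and the ratio of the integrals fails to be non-increasing.
def ratio(x):
    return (x - x**2 / 2) / (x - x**3 / 3)   # closed forms of the two integrals

r1, r2 = ratio(1.0), ratio(1.4)   # 1.0 < 1.4, yet the ratio increases
```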
Radius of convergence of $\sum a_nz^{n^2}$ if $\sum a_nz^n$ has radius of convergence $2$
(2017/8/6) "Amusing" silent downvote, surely for mathematical reasons... :-) Nearly every question on the site asking for the radius of convergence of some power series could make use of the following characterization: The radius of convergence of the series $\sum a_nz^n$ is the unique number $0\leqslant R\leqslant +\infty$ such that: i. For every $|z|<R$, $|a_nz^n|\to0$. ii. For every $|z|>R$, the sequence $|a_nz^n|$ is unbounded. As a consequence: iii. If $|a_nz^n|\to0$, then $|z|\leqslant R$. iv. If the sequence $|a_nz^n|$ is unbounded, then $R$ is finite and $|z|\geqslant R$. Let us apply these to compute the radius of convergence $R$ of $\sum a_nz^{n^2}$, assuming the radius of convergence of $\sum a_nz^n$ is $2$. First, the radius of convergence of $\sum a_nz^n$ is $2$ hence, by i., $|a_nz^n|\to0$ for every $|z|<2$. In particular, $|a_n|\to0$ (take $z=1$), hence, by iii., $R\geqslant1$. Secondly, for every $|z|>1$, $|z|^{n^2-n}2^{-n}\to\infty$, hence $|a_nz^{n^2}|\geqslant|a_n|\,|2z|^n$ for every $n$ large enough. Since $|2z|>2$, by ii., $|a_nz^{n^2}|$ is unbounded. In particular $|a_nz^{n^2}|$ does not converge to $0$, hence the series $\sum a_nz^{n^2}$ diverges, that is, by iv., $R\leqslant1$. Thus, $R=\_\_$.
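For a concrete illustration (the sequence $a_n=2^{-n}$ is my choice, not part of the answer), the root test computed in log space exhibits the two radii numerically:

```python
import math

# Illustrative sequence: a_n = 2^{-n}, so sum a_n z^n has radius 2.
# Root test: R = 1 / limsup |a_n|^{1/n}; work with logs to avoid underflow.
def root_linear(n):
    log_an = -n * math.log(2)          # log |a_n|
    return math.exp(log_an / n)        # |a_n|^{1/n} -> 1/2

def root_square(n):
    log_an = -n * math.log(2)
    return math.exp(log_an / n**2)     # |a_n|^{1/n^2} -> 1

r_linear = 1.0 / root_linear(10**6)    # radius of sum a_n z^n
r_square = 1.0 / root_square(10**6)    # radius of sum a_n z^{n^2}
```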
Finding the Accuracy of a Taylor Polynomial for the Approximation $f(x) \approx T_{n}(x)$
The remainder is $R_n=\frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}$ for some $c$ between $x$ and $a$. In this case, the derivatives of $f(x)$ are all $\pm\sin x$ or $\pm \cos x$, so $|f^{(n+1)}(c)|\leq 1$. Thus $|R_n|\leq \frac{|x-\frac{\pi}{6}|^{n+1}}{(n+1)!}$. You can plug in $n=4$ to finish the problem. Edit: The preceding argument was for all of $\mathbb R$, but the problem is for the specific interval $[0,\pi/3]$, and the OP wants an absolute numerical bound. Restricting to $[0,\pi/3]$ does not let us improve our bound on the fifth derivative $f^{(5)}(x)=\cos(x)$ since $\cos(x)$ achieves the value $1$ on that interval. So $$R_4\leq \frac{1}{5!}|x-\pi/6|^5.$$ To get an absolute numerical bound, notice that $|x-\pi/6|\leq \pi/6$ on the given interval.
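A numerical sanity check, assuming $f(x)=\sin x$ (the answer only says that all derivatives are $\pm\sin x$ or $\pm\cos x$, so this is one matching choice) and $a=\pi/6$:

```python
import math

a = math.pi / 6
# Degree-4 Taylor coefficients of sin about a (f, f', ..., f'''' at a),
# assuming f(x) = sin x, which is consistent with the answer.
derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a), math.sin(a)]

def T4(x):
    return sum(d * (x - a)**k / math.factorial(k) for k, d in enumerate(derivs))

bound = (math.pi / 6)**5 / math.factorial(5)   # (pi/6)^5 / 5!
worst = max(abs(math.sin(x) - T4(x))
            for x in [i * (math.pi / 3) / 1000 for i in range(1001)])
```

The observed worst error on $[0,\pi/3]$ indeed stays below the bound $\frac{(\pi/6)^5}{5!}\approx 3.3\times10^{-4}$.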
The roots of a quadratic with a mistaken constant term are $-17$ and $15$; with a mistaken leading coefficient, $8$ and $-3$. Find the actual roots.
HINT From the givens we know that Loki solved $$a(x+17)(x-15)=0 \implies a(x^2+2x-17\cdot 15)=0$$ Unloki solved $$a(x-8)(x+3)=0\implies a(x^2-5x-8\cdot 3)=0$$
Stuck on AP using binomial expansion
$$\binom n{r-1}-2\binom nr+\binom n{r+1}=0$$ $$\frac{n!}{(n-r+1)!(r-1)!}-2\frac{n!}{(n-r)!r!}+\frac{n!}{(n-r-1)!(r+1)!}=0$$ Multiplying by $(n-r+1)!(r+1)!$ and dividing by $n!$, $$r(r+1)-2(n-r+1)(r+1)+(n-r+1)(n-r)=0$$
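The displayed condition simplifies (my own algebra, not in the answer) to $(n-2r)^2=n+2$, and a brute-force check confirms this against the AP condition directly:

```python
from math import comb

# Brute-force: which (n, r) make C(n,r-1), C(n,r), C(n,r+1) an arithmetic
# progression?  Compare against the simplified form (n - 2r)^2 = n + 2.
solutions = []
for n in range(2, 60):
    for r in range(1, n):
        if comb(n, r - 1) + comb(n, r + 1) == 2 * comb(n, r):
            solutions.append((n, r))
            assert (n - 2 * r) ** 2 == n + 2   # matches the algebra above
```

For instance $n=7$, $r=2$ gives $7, 21, 35$, an AP with common difference $14$.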
Two Variable Limit $\lim_{(x,y)\to(0,0)} \frac{6(1-\cos(xy))}{x^2y\sin(2y)}$
You have\begin{align}\lim_{(x,y)\to(0,0)}\frac{6(1-\cos(xy))}{x^2y\sin(2y)}&=6\lim_{(x,y)\to(0,0)}\frac{1-\cos(xy)}{(xy)^2\frac{\sin(2y)}y}\\&=3\lim_{(x,y)\to(0,0)}\frac{1-\cos(xy)}{(xy)^2}\times\frac{2y}{\sin(2y)}\\&=\frac32,\end{align}since$$\lim_{x\to0}\frac{1-\cos(x)}{x^2}=\frac12\quad\text{and}\quad\lim_{y\to0}\frac y{\sin y}=1.$$
translation and dilation invariance of borel sets
Proofs of things about $\sigma$-algebras often begin by saying "Let $A=$..." and proceed by showing that $A$ is a $\sigma$-algebra. Here: let $A$ be the collection of all sets $E$ such that every translate of $E$ is a Borel set and every dilate of $E$ is a Borel set. Show that $A$ is a $\sigma$-algebra. Since open intervals are in $A$ and the Borel sets form the smallest $\sigma$-algebra containing the open intervals, it follows that every Borel set is in $A$.
Proving conclusion c when give premises
Also, P2 p⟹(q∧r) is equivalent to ¬(q∧r)⟹¬p. Since you have P3 ¬(q∧r), you have immediately ¬p. You do not need P1 r⟹s.
Real line bundle smoothly isomorphic to Möbius bundle
The bundle chart neighborhoods were given to you in the exercise setup, $U$ and $V$. To define the smooth trivializations of $F$, remember that you want a real line bundle, so the fiber is $\mathbb{R}$, and the neighborhoods are called trivializations for a reason. With the transition functions $\tau$, you can construct $F$ (it is a theorem you have probably seen in your class that transition functions satisfying certain conditions, together with trivializations, totally characterize a vector bundle). You are set after taking the maximal bundle atlas for $F$ generated by your charts. Now you need to construct an isomorphism between $F$ and the Möbius bundle. This is so painfully simple it's hard. I can't give details since you haven't told us how your professor defined the Möbius strip in class, but you'll probably need to construct maps from $U$ and $V$ into the Möbius band and show that precomposition by $\tau$ transforms them into one another, so that you have a well-defined map of fiber bundles. Then you're done with the hard work, but you will probably still need to chase definitions to show that this is an isomorphism on each fiber, that it covers a homeomorphism of $S^1$, and so forth.
If $n\Delta s_n=o(1)$ is satisfied then $n\log n \Delta s_n=o(1) $
Consider $s_n=\sum_{k=2}^n\frac 1{k\log k}, n\ge 2.$ Then $\Delta s_n=\frac 1{n\log n},n\ge 3.$ Thus $n\Delta s_n=\frac{1}{\log n}=o(1).$ But then we have that $n\log n\Delta s_n=1\ne o(1).$
Matrix for recurrence relation
Let $A$ be your matrix $$A = \begin{bmatrix} a & b & c\\1 & 0 & 0\\0 & 1 & 0 \end{bmatrix}$$ Starting with given values $f(0),f(1),f(2)$, we can find any $f(n)$ for $n \geq 3$ using matrix exponentiation, i.e. denoting $$x_n = \begin{bmatrix} f(n) \\ f(n-1) \\ f(n-2) \end{bmatrix}$$ we have $$x_n = Ax_{n-1}=A^2x_{n-2} = \ldots =A^{n-2}x_2$$ Written differently, $$\begin{bmatrix} f(n) \\ f(n-1) \\ f(n-2) \end{bmatrix} = \begin{bmatrix} a & b & c\\1 & 0 & 0\\0 & 1 & 0 \end{bmatrix}^{n-2}\begin{bmatrix} f(2) \\ f(1) \\ f(0) \end{bmatrix} $$ which means you can generate any window of size three of $f$, given $a,b,c,f(2),f(1),f(0)$.
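A sketch of the method in code, with illustrative values $a=b=c=1$ and $f(0)=f(1)=f(2)=1$ (a tribonacci-style example; the answer keeps these parameters general):

```python
# f(n) = a f(n-1) + b f(n-2) + c f(n-3) via fast matrix exponentiation.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(X, p):
    # Exponentiation by repeated squaring: O(log p) multiplications.
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    while p:
        if p & 1:
            R = mat_mul(R, X)
        X = mat_mul(X, X)
        p >>= 1
    return R

def f(n, a=1, b=1, c=1, f0=1, f1=1, f2=1):
    if n <= 2:
        return [f0, f1, f2][n]
    A = [[a, b, c], [1, 0, 0], [0, 1, 0]]
    P = mat_pow(A, n - 2)
    # [f(n), f(n-1), f(n-2)]^T = A^(n-2) [f(2), f(1), f(0)]^T
    return P[0][0] * f2 + P[0][1] * f1 + P[0][2] * f0

def f_naive(n):
    # Direct recurrence, used here only as a cross-check.
    vals = [1, 1, 1]
    for _ in range(3, n + 1):
        vals.append(vals[-1] + vals[-2] + vals[-3])
    return vals[n]
```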
Is $\frac{\partial}{\partial x_1}=\frac{\partial}{\partial y_1}$ if $x_1=y_1$?
I think the answer is no, and there is nothing wrong with your example. There are plenty of examples where this happens; in fact, this is $\textbf{warned}$ about in standard texts in differential geometry, e.g. $\textbf{Lee's Smooth Manifolds}$, page 65, which considers the coordinates $(x,y)$ and $(\tilde{x},\tilde{y})$ on $\mathbb{R}^2$ related by $$ \tilde{x} = x, \qquad \tilde{y} = y+x^3. $$ Evaluating at $p=(x,y)=(1,0) \in \mathbb{R}^2$ gives $$ \frac{\partial}{\partial x}\Big|_p \neq \frac{\partial}{\partial \tilde{x}}\Big|_p, $$ which tells us that $\frac{\partial}{\partial x^i}\Big|_p$ depends on the entire coordinate system, not just on $x^i$.
Use any test to determine convergence of the series $\sum_{n=1}^\infty \frac{(-1)^ne^{\frac{1}{n}}}{n^3}$
The comparison test goes through: $e^{1/n}\leq e\leq 3$, so $\left|(-1)^{n}\dfrac{e^{1/n}}{n^{3}}\right|\leq\dfrac{3}{n^{3}}$, and $\sum \frac{3}{n^3}$ converges.
Sum of a binomial sequence equation
It can be written as $$x^{-5}\sum_{n=0}^5 \binom{5}{n}(-1)^n x^{10-2n} =x^{-5}\sum_{n=0}^5 \binom{5}{n}(-1)^n (x^2)^{5-n} =x^{-5}(x^2-1)^5 =\left(\frac{x^2-1}{x}\right)^5=32,$$ which can then be easily solved.
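Finishing the computation (this last step is my own filling-in): restricting to real solutions, the real fifth root gives $\frac{x^2-1}{x}=2$, i.e. $x^2-2x-1=0$, so $x=1\pm\sqrt2$. A quick check against the original sum:

```python
import math
from math import comb

# Candidate real solutions of ((x^2 - 1)/x)^5 = 32, i.e. x^2 - 2x - 1 = 0.
roots = [1 + math.sqrt(2), 1 - math.sqrt(2)]

def original_sum(x):
    # x^{-5} * sum_{n=0}^{5} C(5,n) (-1)^n x^{10-2n}
    return x**-5 * sum(comb(5, n) * (-1)**n * x**(10 - 2*n) for n in range(6))

checks = [original_sum(x) for x in roots]   # both should be 32
```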
Pulling Cards in Consecutive and Ascending Order
You forgot to take into account the probability of getting that first $1$ (or $2$, or $3$, or $4$) in the first place: after all, your first card might be a $5$ or $6$, in which case you’re out of luck right away. If your first card is $1$, then the probability of drawing the $2$ and $3$ in that order is $\frac15\cdot\frac14$, but the probability of drawing the $1$ in the first place is $\frac16$, so the overall probability of the $123$ sequence is $\frac16\cdot\frac15\cdot\frac14$. Thus, you really end up with $4\cdot\frac16\cdot\frac15\cdot\frac14=\frac16\cdot\frac15=\frac1{30}$, just as before. You could also argue like this: the probability of getting a usable card (meaning $1,2,3$, or $4$) on the first draw is $\frac46=\frac23$. Once you’ve drawn it, the probability of drawing the next card in ascending sequence on the second draw is $\frac15$, and the probability of drawing the right card on the third draw is $\frac14$, so the overall probability of success is $\frac23\cdot\frac15\cdot\frac14=\frac1{30}$.
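The count can be confirmed by exact enumeration of all ordered three-card draws from six cards:

```python
from itertools import permutations
from fractions import Fraction

# All ordered draws of 3 distinct cards from {1,...,6}: 6*5*4 = 120 of them.
triples = list(permutations(range(1, 7), 3))
# Successes: consecutive ascending runs (1,2,3), (2,3,4), (3,4,5), (4,5,6).
hits = [t for t in triples if t[1] == t[0] + 1 and t[2] == t[1] + 1]
probability = Fraction(len(hits), len(triples))   # 4/120 = 1/30
```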
Inverse of the Jordan block matrix
Your matrix $J_\lambda(n)=\lambda I+N$ where $$N=\pmatrix{0&1&0&\cdots&0&0\\ 0&0&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&0\\ 0&0&0&\cdots&0&1\\ 0&0&0&\cdots&0&0 }.$$ Then $N$ is nilpotent: $N^n=0$, and so $I+tN$ will have the inverse $I-tN+t^2N^2-\cdots +(-t)^{n-1}N^{n-1}$. Then $$J_\lambda(n)^{-1}=\lambda^{-1}(I+\lambda^{-1}N)^{-1} =\lambda^{-1}(I-\lambda^{-1} N+\lambda^{-2}N^2-\cdots +(-\lambda)^{-n+1}N^{n-1}).$$
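A sanity check of the formula for a $4\times4$ block with $\lambda=3$ (illustrative values), in exact rational arithmetic:

```python
from fractions import Fraction

# Check that J^{-1} = sum_{k=0}^{n-1} (-1)^k lam^{-1-k} N^k for J = lam I + N.
n, lam = 4, Fraction(3)

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
N = [[Fraction(int(j == i + 1)) for j in range(n)] for i in range(n)]  # superdiagonal
J = [[lam * I[i][j] + N[i][j] for j in range(n)] for i in range(n)]

Jinv = [[Fraction(0)] * n for _ in range(n)]
Nk = I  # running power N^k
for k in range(n):
    for i in range(n):
        for j in range(n):
            Jinv[i][j] += (-1) ** k * lam ** (-1 - k) * Nk[i][j]
    Nk = mat_mul(Nk, N)

product = mat_mul(J, Jinv)   # should be the identity, exactly
```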
Exercise measure theory: properties of a function
That depends a bit on which definition of the integral you're using. I take the liberty of using the simplest one for this purpose: sloppily expressed, the area (or measure) of the region under the graph, taken negative where the function is below the $x$-axis. Part (*) should be rather straightforward using this definition, as it's basically the same set we're measuring; you have to do the outer-inner measurement construction first to see this, though. Part (**) follows from the estimate that $f(k+h)-f(k)$ lies between $|h||A_k|$ and $|h||A_{k+h}|$. If you're not using this definition, it will be required to prove that $\int f(x)dx = |P|-|N|$, where $P = \{(x, y): 0\le y\le f(x)\}$ and $N=\{(x, y): f(x)\le y \le 0\}$.
Calculating economic profit margins
... if their margins are $50\%$ then their costs are $50 \%\cdot\text{revenue}$. That is true. It can be shown mathematically: $$\text{margin}=1 - \frac{\text{costs}}{\text{revenue}}$$ Now we say that the costs are half of the revenue, so we can replace costs by $0.5\cdot \text{revenue}$: $$\text{margin}=1 - \frac{0.5\cdot \text{revenue}}{\text{revenue}}$$ The revenue cancels out: $$\text{margin}=1 - \frac{0.5}{1}=1-0.5=0.5$$ The margin is indeed $50\%$.
Why is the Ehrenfeucht theory complete?
Use quantifier elimination in $T$. Since the language $\{<\} \cup \{c_i:i<\omega\}$ contains constants, it will follow that $T$ is complete. To show that $T$ actually has quantifier elimination, use the following theorem (Theorem 3.2.5 in Tent-Ziegler): $T$ has quantifier elimination if and only if for all models $\mathcal M$ and $\mathcal N$ of $T$ with a common substructure $\mathcal A$ and for all primitive existential formulas $\varphi(x_1,\ldots,x_n)$ and parameters $a_1,\ldots,a_n$ from $\mathcal A$ we have $\mathcal M \models \varphi(a_1,\ldots,a_n) \Rightarrow \mathcal N \models \varphi(a_1,\ldots,a_n)$. A primitive existential formula takes the form $\exists v \psi(v,w_1,\ldots,w_n)$ where $\psi$ is a conjunction of atomic formulas and negated atomic formulas. If $\psi(v,w_1,\ldots,w_n) \rightarrow v=w_i$ or $\psi(v,w_1,\ldots,w_n) \rightarrow v = c_i$ for some $i$, then you can argue directly that the condition from the theorem holds. Without loss of generality, every other kind of satisfiable primitive existential formula $\exists v \psi(v,w_1,\ldots,w_n)$ takes the form $$\exists v \bigwedge_{f(i)=0} v<w_i \wedge \bigwedge_{f(i)=1} w_i<v \wedge \bigwedge_{j \le k} c_j<v \wedge \bigwedge_{j>k} v<c_j$$ where $f$ encodes a convenient partition of $\{1,\ldots,n\}$. Using the fact that $T$ is the theory of dense linear orders, you can now argue that $\mathcal M \models \exists v \psi(v,a_1,\ldots,a_n) \Rightarrow \mathcal N \models \exists v \psi(v,a_1,\ldots,a_n)$. I've elided some details, but this should be a workable outline of an argument.
In normed space, can a countable dense subset in closed unit ball of X be finite?
$X$ is a normed space, so it is Hausdorff. In particular, $C$ finite implies $C$ is closed. $C$ dense means by definition that $\bar C = B_X$, so $C = \bar C = B_X$. Thus $B_X$ is finite. But for any non-trivial vector space, any subset with non-empty interior is infinite. Hence this is only possible when $X$ has dimension $0$.
Show $\int_X \limsup\limits_{n\to\infty} \mathbb{1}_{A_n} d\mu=0$ for $\sum_{n\in\mathbb N} \mu(A_n)<\infty$
This is a purely set-theoretical result: note that $A=\limsup A_n$ is $A=\bigcap\limits_nB_n$ with $B_n=\bigcup\limits_{k\geqslant n}A_k$, in particular, for every $n$, $A\subseteq B_n$, hence $\mu(A)\leqslant\mu(B_n)$. Now, $\mu(B_n)\leqslant\sum\limits_{k=n}^\infty\mu(A_k)$ hence $\mu(A)\leqslant\sum\limits_{k=n}^\infty\mu(A_k)$ for every $n$. Being the tail of a convergent series, $\sum\limits_{k=n}^\infty\mu(A_k)\to0$ when $n\to\infty$, hence $\mu(A)=0$.
Finding the bounds of a triple integral (spherical coordinates)
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\underline{"Outside"}\,\,\, Volume\ {\Large\cal V}_{\mrm{out}} \pars{~\mbox{Cylindrical Coordinates}~}}$. Hereafter, $\ds{\bracks{\cdots}}$ is an Iverson Bracket. Namely, $\ds{\bracks{P} = \left\{\begin{array}{rcl} \ds{1} & \mbox{if}\ P\ \mbox{is}\ true. \\[1mm] \ds{0} & \,\,\,\,\,\mbox{if}\ P\ \mbox{is}\ false. \end{array}\right.}$ \begin{align} {\Large\cal V}_{\mrm{out}} & \equiv \bbox[5px,#ffd]{\iiint_{\mathbb{R}^{3}}\bracks{x^{2} + y^{2} > 1}\bracks{x^{2} + y^{2} + z^{2} < 4} \dd^{3}\vec{r}} \\[5mm] & = \iiint_{\mathbb{R}^{3}}\bracks{\rho^{2} > 1} \bracks{\rho^{2} + z^{2} < 4} \rho\,\dd\rho\,\dd \phi\,\dd z \\[5mm] & = \iiint_{\mathbb{R}^{3}} \bracks{1 < \rho^{2} < 4 - z^{2}} \rho\,\dd\rho\,\dd \phi\,\dd z \\[5mm] & = \iiint_{\mathbb{R}^{3}} \bracks{1 < 4 - z^{2}} \bracks{1 < \rho < \root{4 - z^{2}}} \rho\,\dd\rho\,\dd \phi\,\dd z \\[5mm] & = \iiint_{\mathbb{R}^{3}} \bracks{\verts{z} < \root{3}} \bracks{1 < \rho < \root{4 - z^{2}}} \rho\,\dd\rho\,\dd \phi\,\dd z \\[5mm] & = 2\pi\int_{-\root{3}}^{\root{3}} \int_{1}^{\root{4 - z^{2}}}\rho\,\dd\rho\,\dd z \\[5mm] & = \pi\int_{-\root{3}}^{\root{3}}\pars{3 - z^{2}}\dd z \\[5mm] & = \bbx{4\root{3}\pi} \approx 21.7656 \end{align}
Who is credited for this theorem on bounding boxes for an ellipse?
The result is perhaps what is called the director circle. Is it like this? (A special case is shown in the image: the director circle of an ellipse.)
Twin prime "test" via congruence
Your logic is not reversible, so the congruence you derive is a necessary condition, not an if-and-only-if. For instance, there is no hope of deducing (1) from (3), because you have eliminated any information there was about $(n-1)!$ from (3).
Why is $\int_0^L [\cos(\frac{n \pi x}{L} - \frac{m \pi x}{L}) - \cos(\frac{n \pi x}{L} + \frac{m \pi x}{L})]dx = L $?
If $n = m$ then $$\begin{aligned} \int_0^L \cos\left(\frac{(n-m)\pi x}{L}\right) dx &= \int_0^L \cos(0) dx \\ &= \int_0^L 1\ dx \\ &= L \end{aligned}$$ Note that if $n+m=0$ then (substituting $n=m$) we have $2n = 0$, so $n=0$, and similarly, $m=0$. Since this is not the case, we must have $n+m \neq 0$. Then: $$\begin{aligned} \int_0^L \cos\left(\frac{(n+m)\pi x}{L}\right) dx &=\frac{L}{(n+m)\pi}\left.\sin\left(\frac{(n+m)\pi x}{L}\right)\right|_{x=0}^{L} \\ &= \frac{L}{(n+m)\pi}\sin\left((n+m)\pi\right) \\ &= 0 \end{aligned}$$ since $\sin(k\pi) = 0$ for any integer $k$.
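Both computations can be checked numerically, e.g. with composite Simpson's rule (the choices $L=1$, $n=m=2$ are illustrative):

```python
import math

# Composite Simpson's rule on a uniform grid.
def simpson(fn, a, b, steps=2000):
    h = (b - a) / steps
    s = fn(a) + fn(b)
    for i in range(1, steps):
        s += fn(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

L, n, m = 1.0, 2, 2
first = simpson(lambda x: math.cos((n - m) * math.pi * x / L), 0, L)   # -> L
second = simpson(lambda x: math.cos((n + m) * math.pi * x / L), 0, L)  # -> 0
```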
Cauchy problem for $xy'+y=y^2$
It can be solved for $y$ explicitly in this case. I don't have my computer on me, so typesetting is tough, but it goes like this: exponentiate both sides, raise both sides to the $-1$ power, and break up the fraction. Then you get something that is easy to solve for $y$.
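For reference, one way to carry out those steps explicitly (my own filling-in of the outline, with $K$ an arbitrary constant):

```latex
xy' + y = y^2
\;\Longrightarrow\;
\frac{dy}{y(y-1)} = \frac{dx}{x}
\;\Longrightarrow\;
\ln\left|\frac{y-1}{y}\right| = \ln|x| + C .
% Exponentiate both sides:
\frac{y-1}{y} = Kx
\quad\Longrightarrow\quad
1 - \frac{1}{y} = Kx
\quad\Longrightarrow\quad
y = \frac{1}{1 - Kx}.
```

One checks directly that $y=\frac{1}{1-Kx}$ satisfies $xy'+y=y^2$: here $xy'=\frac{Kx}{(1-Kx)^2}$, and $\frac{Kx}{(1-Kx)^2}+\frac{1}{1-Kx}=\frac{1}{(1-Kx)^2}=y^2$.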
number of vertices in G
Hint: A tree is a connected graph with no cycles. The following facts are useful: A tree with $n$ vertices must have $(n-1)$ edges A connected graph is a tree if and only if removing any edge will disconnect the graph If we are to have as many vertices as possible, then each of the $6$ components of the graph must be a tree. Suppose that the components have $E_1,E_2,\dots,E_6$ many edges respectively. The total number of edges is 30, which is to say that $$ E_1 + E_2 + \cdots + E_6 = 30 $$ Since each component is a tree, the number of vertices in the $i$th component must be $E_i + 1$. So, the total number of vertices is $$ (E_1 + 1) + (E_2 + 1) + \cdots + (E_6 + 1) $$ which is necessarily $36$ (why?).
Decreasing sequence $(f_n)_{n\geq 1}$ of positive measurable functions $f_n \downarrow f$ in $(0,1]$ or $(0,1)$.
$f_n(x)=\frac1 {nx}, f(x)=0$ in $(0,1)$.
Find $\lim_{(x,y) \to (0,0)} \frac{\tan(x^3+y^3)}{\sin(x^2+y^2)}$
Note that $$ \frac{\tan(x^3 + y^3)}{\sin(x^2 + y^2)} = \frac{1}{\cos(x^3+y^3)} \frac{\sin(x^3 + y^3)}{\sin(x^2 + y^2)} = \underbrace{\frac{1}{\cos(x^3+y^3)}}_{\text{(1)}} \underbrace{\frac{\sin(x^3 + y^3)}{x^3 + y^3}}_{\text{(2)}} \underbrace{\frac{x^2 + y^2}{\sin(x^2 + y^2)}}_{\text{(3)}} \underbrace{\frac{x^3 + y^3}{x^2 + y^2}}_{\text{(4)}}.$$ The first term in the product is continuous at $(x,y) = (0,0)$. The second term is of the form $g(p(x,y))$ where $$ g(z) = \begin{cases} \frac{\sin(z)}{z} & z \neq 0 \\ 1 & z = 0 \end{cases} $$ is continuous and $p(x,y) \rightarrow 0$ as $(x,y) \rightarrow (0,0)$, and so it tends to $g(0)$. The third term is also of the form $g(p(x,y))$ for a different $g$ and $p$. The last term can be dealt with easily using polar coordinates.
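A quick numerical check along two different lines through the origin (the paths $y=x$ and $y=-2x$ are my choice) is consistent with the limit being $0$:

```python
import math

# The last factor (x^3+y^3)/(x^2+y^2) vanishes at the origin; the other
# three factors tend to 1, so the whole expression should tend to 0.
def F(x, y):
    return math.tan(x**3 + y**3) / math.sin(x**2 + y**2)

samples = ([F(t, t) for t in (1e-2, 1e-3, 1e-4)]
           + [F(t, -2 * t) for t in (1e-2, 1e-3, 1e-4)])
```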
Does there exist a prime number within the interval?
This is true indeed. First consider $n$ s.t. $p_n \geq 29$. It has been proven that for $m \geq 3$, there is a prime in the interval $(m, \frac{4(m+2)}{3})$, see here, Corollary 2.2. Since $p_n \geq 3$, there is a prime in $(p_n, \frac{4(p_n+2)}{3})$, so $p_{n+1}<\frac{4(p_n+2)}{3}$. Now $$\sqrt{2(p_{n+1}^2-1)}<\sqrt{2}p_{n+1}<\frac{4\sqrt{2}(p_n+2)}{3}<\left\lceil \frac{4\sqrt{2}(p_n+2)}{3} \right\rceil$$ Since $\left\lceil \frac{4\sqrt{2}(p_n+2)}{3} \right\rceil>\frac{8\sqrt{2}}{3} \geq 3$, there is a prime in $\left(\left\lceil \frac{4\sqrt{2}(p_n+2)}{3} \right\rceil, \frac{4(\left\lceil \frac{4\sqrt{2}(p_n+2)}{3} \right\rceil+2)}{3}\right)$. Thus $\exists p_m$ s.t. $$\sqrt{2(p_{n+1}^2-1)}<\left\lceil \frac{4\sqrt{2}(p_n+2)}{3} \right\rceil<p_m<\frac{4(\left\lceil \frac{4\sqrt{2}(p_n+2)}{3} \right\rceil+2)}{3}<\frac{4(\frac{4\sqrt{2}(p_n+2)}{3}+3)}{3}$$ Now since $p_n \geq 29$, we have $$(3p_n-4)-\left(\frac{4(\frac{4\sqrt{2}(p_n+2)}{3}+3)}{3}\right)=\left(3-\frac{16 \sqrt{2}}{9}\right)p_n-\left(8+\frac{32\sqrt{2}}{9}\right)>0$$ Thus $$\sqrt{2(p_{n+1}^2-1)}<p_m<\frac{4(\frac{4\sqrt{2}(p_n+2)}{3}+3)}{3}<3p_n-4$$ It remains to check $5 \leq p_n<29$. Checking small cases:
$p_n=5$: the interval is $(\sqrt{96}, 11]$, so $p_m=11$ works.
$p_n=7$: the interval is $(\sqrt{240}, 17]$, so $p_m=17$ works.
$p_n=11$: the interval is $(\sqrt{336}, 29]$, so $p_m=29$ works.
$p_n=13$: the interval is $(\sqrt{576}, 35]$, so $p_m=31$ works.
$p_n=17$: the interval is $(\sqrt{720}, 47]$, so $p_m=47$ works.
$p_n=19$: the interval is $(\sqrt{1056}, 53]$, so $p_m=53$ works.
$p_n=23$: the interval is $(\sqrt{1680}, 65]$, so $p_m=61$ works.
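The whole claim can also be verified directly for small primes with a sieve (the search range below is my choice):

```python
import math

# For each consecutive prime pair (p_n, p_{n+1}) with 5 <= p_n < 1000,
# check there is a prime p_m with sqrt(2(p_{n+1}^2 - 1)) < p_m <= 3 p_n - 4.
LIMIT = 3100  # covers 3*p_n - 4 for p_n < 1000
sieve = [True] * (LIMIT + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(LIMIT ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
primes = [p for p in range(2, LIMIT + 1) if sieve[p]]

ok = True
plist = [p for p in primes if p < 1000]
for pn, pn1 in zip(plist, plist[1:]):
    if pn < 5:
        continue  # the statement starts at p_n = 5
    lo = math.sqrt(2 * (pn1 ** 2 - 1))
    hi = 3 * pn - 4
    if not any(lo < p <= hi for p in primes):
        ok = False
```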
Computing the curvature of a surface through the area of a geodesic disk
No, $R_{ij} = \sum_k R_{ikjk}$ is the Ricci tensor here. You've got a typo up there, or else they have one in the paper. (Of course, for a surface, the Ricci curvature is just the scalar (Gaussian) curvature.)
How to prove that $c=(Id-B)A^{-1}b \hspace{1cm} $ if $x^{k+1}=Bx^{k}+c$ converges to the solution of $Ax=b$
Taking the limit in $x_{n+1}=Bx_n+c$ as $n\to \infty$, and writing $x$ for the limit, $$x=Bx+c\Rightarrow c=(I-B)x=(I-B)A^{-1}b.$$
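A concrete sketch with a Jacobi-type splitting (the $2\times2$ system is an illustrative choice; the answer treats a general convergent iteration). Here $B=-D^{-1}R$ with $A=D+R$, $D$ the diagonal of $A$, and $c=(I-B)A^{-1}b$ is exactly the fixed-point condition above:

```python
from fractions import Fraction as F

A = [[F(4), F(1)], [F(1), F(3)]]   # diagonally dominant, so Jacobi converges
b = [F(1), F(2)]

# Exact solution of A x = b by Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x_true = [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
          (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# Jacobi iteration matrix B = -D^{-1} R.
B = [[F(0), -A[0][1] / A[0][0]], [-A[1][0] / A[1][1], F(0)]]
# c = (I - B) A^{-1} b, computed as x_true - B x_true.
c = [x_true[0] - (B[0][0] * x_true[0] + B[0][1] * x_true[1]),
     x_true[1] - (B[1][0] * x_true[0] + B[1][1] * x_true[1])]

x = [F(0), F(0)]
for _ in range(60):
    x = [B[0][0] * x[0] + B[0][1] * x[1] + c[0],
         B[1][0] * x[0] + B[1][1] * x[1] + c[1]]

err = max(abs(x[0] - x_true[0]), abs(x[1] - x_true[1]))
```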
Suppose $f(x)$ is continuous at $0$ and $f(0) = 0$. Prove that $\,x f(x)$ is differentiable at $0$.
Let $g(x) = x\cdot f(x)$. By the definition of differentiability, we want the following limit to converge $$ \lim_{h\to0} \frac{g(h) - g(0)}{h} = \lim_{h\to0} \frac{h\cdot f(h) - 0\cdot f(0)}{h} = \lim_{h\to0} \frac{h\cdot f(h)}{h} = \lim_{h\to0} f(h) = 0 $$ Note the last step requires us to know $f$ is continuous (really we just need to know the limit exists, and continuity implies this).
Subgroup notation on Wikipedia
As mentioned in the comment by Derek Holt, this is the notation used in the book "ATLAS of Finite Groups". In particular, $2^{1+24}.Co_1$ means a group with a normal subgroup $N$ that is an extraspecial group of order $2^{25}$ and such that the quotient group $G/N$ is isomorphic to the first Conway simple group, $Co_1$.
Show that A*B and B*A have the same order
Hint: $(ab)^n = ababab\cdots ab = abab\cdots abaa^{-1} = a(baba\cdots ba)a^{-1} = a((ba)^n)a^{-1}$
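The hint can be checked exhaustively in a small concrete group, e.g. permutations of four points (tuples composed as functions):

```python
from itertools import permutations

# p*q means "apply q, then p", with permutations stored as tuples.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    e = tuple(range(len(p)))
    k, r = 1, p
    while r != e:
        r = compose(r, p)
        k += 1
    return k

# ord(ab) == ord(ba) for every pair in S_4.
all_ok = all(order(compose(a, b)) == order(compose(b, a))
             for a in permutations(range(4))
             for b in permutations(range(4)))
```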
Iteration of a parabolic transformation
If there exists a point $x\in\overline{B^n}$ such that $\lim_{m\rightarrow\infty}\phi^m(x)\neq a$, then there is some positive real number $r$ so that $d_{\overline{B^n}}(\phi^m(x),a)>r$ for infinitely many $m>0$. Enumerate the corresponding exponents $m_i$. This gives us a sequence of points $(\phi^{m_i}(x))_{i>0}$ in the complement of the open $r$-ball about $a$. This complement is compact, so there is a convergent subsequence, with limit $z\neq a$. By continuity of $\phi$, the point $z$ must be fixed, contradicting the fact that $\phi$ is parabolic.
Software to solve equation
You don't need software to solve the equation. It is of the form $$A = B + \frac{2000}{(1+x)^{15}},$$ where $A, B > 0$ are constants independent of $x$, defined by the sums you have above. The solution is $$x = \left(\frac{2000}{A-B}\right)^{1/15} - 1.$$
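A sketch with made-up values of $A$ and $B$ (the real ones come from the sums in the question, which are not reproduced here), verifying that the closed form solves the equation:

```python
# Hypothetical constants with A > B; only used to exercise the formula.
A, B = 10000.0, 8500.0
x = (2000.0 / (A - B)) ** (1.0 / 15.0) - 1.0
residual = A - (B + 2000.0 / (1.0 + x) ** 15)   # should be ~0
```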
Differentiability of the function $f(x,y)=\begin{cases} \frac{xy}{x^2+y^2}, & \text{if $(x,y)\ne(0,0)$}\\ 0, & \text{otherwise} \end{cases}$
Yes, you are right: the function is not continuous, and therefore it is not differentiable at $(x,y)=(0,0)$, since differentiability $\implies$ continuity. Remarks: as a simpler alternative, to show that $f(x,y)$ is not continuous at the origin, it suffices to consider the paths $x=0$ and $x=y$. The existence of the partial derivatives is not sufficient to guarantee differentiability; indeed, for that we also need at least one partial derivative to be continuous at that point.
Entire function is polynomial of degree $n$ iff $z^{n}f(1/z) \to \alpha$ as $z \to 0$.
Since $$ \begin{align} \alpha &=\lim_{z\to0}z^nf\left(\frac1z\right)\\ &=\lim_{z\to\infty}\frac1{z^n}f(z) \end{align} $$ as $z\to\infty$ $$ f(z)=\alpha z^n+o\!\left(z^n\right) $$ Thus, for all $k>n$, we have $$ \begin{align} f^{(k)}(0) &=\frac{k!}{2\pi i}\oint_{|z|=R}\frac{f(z)}{(z-0)^{k+1}}\,\mathrm{d}z\\ &=\lim_{R\to\infty}\frac{k!}{2\pi i}\oint_{|z|=R}\frac{f(z)}{z^{k+1}}\,\mathrm{d}z\\ &=\lim_{R\to\infty}\frac{k!}{2\pi i}\oint_{|z|=R}\frac{\alpha z^n+o\!\left(z^n\right)}{z^{k+1}}\,\mathrm{d}z\\[6pt] &=0 \end{align} $$ Therefore, the Taylor series has degree at most $n$. Furthermore, $$ \begin{align} f^{(n)}(0) &=\frac{n!}{2\pi i}\oint_{|z|=R}\frac{f(z)}{(z-0)^{n+1}}\,\mathrm{d}z\\ &=\lim_{R\to\infty}\frac{n!}{2\pi i}\oint_{|z|=R}\frac{f(z)}{z^{n+1}}\,\mathrm{d}z\\ &=\lim_{R\to\infty}\frac{n!}{2\pi i}\oint_{|z|=R}\frac{\alpha z^n+o\!\left(z^n\right)}{z^{n+1}}\,\mathrm{d}z\\[6pt] &=\alpha\, n! \end{align} $$ Thus, the coefficient of $z^n$ in $f(z)$ is $\alpha$. Therefore, $f$ has degree $n$.
Topological group needs to be path connected
You don't need to assume that a topological group is path connected. Normally, in the definition of the fundamental group, you have to pick a base point. However, the fundamental group of a topological group does not depend on the choice of the base point. That is because for any $a, b\in G$ there is a homeomorphism $f:G\to G$ that maps $a\mapsto b$, namely $f(x)=ba^{-1}x$. This homeomorphism induces an isomorphism $\pi(G, a)\simeq \pi(G, b)$. Actually, the same is true for any space with this property (i.e. the existence of such a homeomorphism for any two points). So that's why the notation $\pi(G)$ makes sense. Also, it is convenient to always take $\pi(G, e)$, where $e$ is the neutral element of $G$, because the path connected component of $e$ is a closed normal subgroup of $G$. So, from another point of view, you can always assume that $G$ is path connected.
law of large numbers and renewal processes
$$\forall n\geqslant n_\varepsilon,\ a(1-\varepsilon)\leqslant\frac{S_n}n\leqslant a(1+\varepsilon)\implies\forall t\geqslant a(1+\varepsilon)n_\varepsilon,\ \frac1{a(1+\varepsilon)}\leqslant\frac{N_t}t\leqslant\frac1{a(1-\varepsilon)}$$
$\pi \cot (\pi z) = \frac{1}{z} + \sum_{n \in \mathbb{Z},\ n \neq 0} \frac{1}{z-n}+ \frac{1}{n}$
Consider the function $$d(z) = \pi \cot \pi z - \frac{1}{z-1}$$ in a neighbourhood of $1$. Writing $z = 1+h$, we have $$\begin{align} d(1+h) &= \pi\frac{\cos \pi(1+h)}{\sin \pi(1+h)} - \frac{1}{h}\\ &= \pi \frac{-\cos \pi h}{-\sin \pi h} - \frac{1}{h}\\ &= \frac{1 - \frac{\pi^2h^2}{2} + O(h^4)}{h\left(1 - \frac{\pi^2h^2}{6}+O(h^4)\right)} - \frac{1}{h}\\ &= \frac{1}{h}\left(1- \frac{\pi^2h^2}{2}+O(h^4)\right)\left(1 + \frac{\pi^2h^2}{6}+O(h^4)\right) - \frac{1}{h}\\ &= \frac{1}{h}\left(1-\frac{\pi^2h^2}{3} + O(h^4)\right) - \frac{1}{h}\\ &= -\frac{\pi^2h}{3} + O(h^3), \end{align}$$ so $d$ has a removable singularity in $1$, and after removing it, we have $d(1) = 0$. Then it remains to see that $$e(z) = \left(\frac{1}{z} + \sum_{n\neq 0} \frac{1}{z-n}+\frac{1}{n}\right) - \frac{1}{z-1} = \frac{1}{z} + 1 + \sum_{n\notin \{0,1\}} \frac{1}{z-n} + \frac{1}{n}$$ also has a zero in $1$. Having cancelled the $\frac{1}{z-1}$ term, we can compute $e(1)$ simply by substituting $1$ for $z$ in the last representation. The sums nicely telescope then and yield the expected result. Since $h(z) = d(z) - e(z)$, that concludes the proof.
Number of elements of $A_k = \{ B \subset \mathbb N : \mu(B) = k \}$, $ k \in \mathbb R $
For any $k>0$, $A_k$ has cardinality $\mathfrak{c} = 2^{\aleph_0}$: let $N_0$ be the set of even integers and $N_1$ the set of odd integers. Since $\sum_{n \in N_0} u_n = \sum_{n \in N_1} u_n = \infty$, it is again the case that you can obtain any real as the measure of a subset of $N_0$ or of a subset of $N_1$: $\mu( \mathcal P (N_0)) = \mu( \mathcal P (N_1)) = [0, + \infty]$. Let $0<l<k$, and choose a subset $X_0$ of $N_0$ such that $\mu(X_0) = l$, and a subset $X_1$ of $N_1$ such that $\mu(X_1) = k-l$. Then $\mu(X_0 \cup X_1) = l+(k-l) = k$. Since for any fixed value of $k$ we can choose $2^{\aleph_0}$ many values for $l$, we obtain $2^{\aleph_0}$ many distinct subsets $X$ of $\mathbb N$ such that $\mu(X)=k$. Meanwhile, $A_k$ is at most of cardinality $2^{\aleph_0}$ since it is a subset of $\mathcal P(\mathbb N)$. Thus $A_k$ has cardinality $2^{\aleph_0}$.
Finding the volume in a cylinder that is intersected by a plane
First of all, you made a mistake in your second vector when computing the direction of the plane: $$PR=(-8,-4,2),$$ and your cross product doesn't look correct even with your vector $PR$. Now the volume can be set up like this: $$\iint_S z\, dxdy$$ where $S$ is the projection of the solid to the $xy$ plane and $z$ is the upper limit, namely the plane solved for $z$. In fact, you need to check whether the plane stays below $z=4$ in the confined region; after some work you will see that it really does, which is why $z=4$ does not appear in the integral any more. The triple integral is $$\iint_S\int_0^z dz\,dxdy,$$ where the upper limit $z$ is exactly the plane. That is why I wrote the double integral.
Solving $(1+ r/100)^t =2$ for the variable $r$
Rewritten: $$ \begin{align} \left(1+\frac{r}{100}\right)^t&=2\\ 1+\frac{r}{100}&=\sqrt[t]{2}\\ \frac{r}{100}&=\sqrt[t]{2}-1\\ r&=100\left(\sqrt[t]{2}-1\right)\\ &=\Large100\left(2^\frac{1}{t}-1\right). \end{align} $$ $$\Large\color{blue}{\text{# }\mathbb{Q.E.D.}\text{ #}}$$
Finding zeroes of $x^3-5x^2+11x+17$
Note: Let $f(x)=x^3-5x^2+11x+17$. $f(-1)=0\Rightarrow (x+1)|f(x)$. Divide, and get a quadratic factor. I'm sure you'll be able to find the other two roots then.
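Completing the hint (this step is my own filling-in): dividing $f(x)$ by $x+1$ gives the quadratic factor $x^2-6x+17$, whose roots are complex, $x=3\pm2\sqrt2\,i$. A quick check:

```python
import cmath

def f(x):
    return x**3 - 5*x**2 + 11*x + 17

# Roots: -1 from the hint, plus the roots of x^2 - 6x + 17.
roots = [-1, 3 + 2j * cmath.sqrt(2), 3 - 2j * cmath.sqrt(2)]
values = [f(r) for r in roots]   # should all be ~0
```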
How many pairs of $(x, y)$ satisfy this equation
Factor it as $xy(x+y)=30$. Write $30=2\times3\times5$. Therefore $y$ has to be a factor of $30$, so $y=\pm1,\pm2,\pm3,\pm5,\pm6,\pm10,\pm15,\pm30$. However, if $y\geq6$ and $x\geq1$ then $xy(x+y)\geq42$. If $y\geq6$ and $x\leq-1$ then we must also have $x\leq-7$, since otherwise one factor is negative and two are positive; then also $xy(x+y)\geq42$. So we check the remaining cases:

$y=1$ gives $x(x+1)=30$, thus $x=5$ or $x=-6$.
$y=2$ gives $x(x+2)=15$, thus $x=3$ or $x=-5$.
$y=3$ gives $x(x+3)=10$, thus $x=2$ or $x=-5$.
$y=5$ gives $x(x+5)=6$, thus $x=1$ or $x=-6$.
$y=-1$ gives $x(x-1)=-30$, which has no real solutions.
$y=-2$ gives $x(x-2)=-15$, which has no real solutions.
$y=-3$ gives $x(x-3)=-10$, which has no real solutions.
$y=-5$ gives $x(x-5)=-6$, thus $x=2$ or $x=3$.
$y=-6$ gives $x(x-6)=-5$, thus $x=1$ or $x=5$.
$y=-10$ gives $x(x-10)=-3$, which has no integer solutions.
$y=-15$ gives $x(x-15)=-2$, which has no integer solutions.
$y=-30$ gives $x(x-30)=-1$, which has no integer solutions.

Conclusion: We get the pairs $(5,1), (1,5), (-6,1), (1,-6), (2,3), (3,2), \\ (2,-5), (-5,2), (5,-6), (-6,5), (3,-5), (-5,3)$
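A brute-force search (my addition, just to confirm the case analysis; the bound $|x|,|y|\le 30$ holds because both $x$ and $y$ must divide $30$) recovers exactly these twelve pairs:

```python
# Enumerate all integer pairs with x*y*(x+y) = 30.
pairs = sorted((x, y)
               for x in range(-30, 31)
               for y in range(-30, 31)
               if x * y * (x + y) == 30)
print(len(pairs))   # 12
print(pairs)
```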
What is the derivative of $x^n$?
That depends on your definition of $x^n$ for irrational $n$. For instance, if your definition is $x^n=e^{n\ln x}$, then the chain rule, together with the facts that $(e^x)'=e^x$ and $(\ln x)'=1/x$, gives $$\frac{d}{dx}e^{n\ln x}=e^{n\ln x}\cdot\frac{n}{x}=nx^{n-1}.$$
A hand of six cards is dealt from a standard poker deck. Find a formula for $p_{XYZ}(x,y,z)$.
The numerator should be $$\binom{4}{x}\binom{4}{y}\binom{4}{z}\binom{40}{6-x-y-z},$$ for any $x,y,z$ with $x+y+z\le 6$, with the obvious restrictions on $x,y,z$.
Prove $\frac{1}{n^{p+1}}<\frac{1}{p}\left[\frac{1}{(n-1)^p}-\frac{1}{n^p}\right]$
Given $p \in (0,+\infty)$, consider $f(x)=\dfrac{1}{x^p}$ for $x>1$. By the mean value theorem (Lagrange), for $x>1$ there exists $c \in (x-1,x)$ such that $$\dfrac{f(x-1)-f(x)}{(x-1)-x}=f'(c).$$ This yields $$\dfrac{1}{(x-1)^p}-\dfrac{1}{x^p}=\dfrac{p}{c^{p+1}} > \dfrac{p}{x^{p+1}}.$$ Therefore $$\dfrac{1}{x^{p+1}} < \dfrac{1}{p}\left[\dfrac{1}{(x-1)^p}-\dfrac{1}{x^p}\right]. \tag{$*$}$$ Substituting $x=n$ ($n \in \mathbb{N}$, $n\ge2$) into $(*)$, we have $$\dfrac{1}{n^{p+1}} < \dfrac{1}{p}\left[\dfrac{1}{(n-1)^p}-\dfrac{1}{n^p}\right].$$
Cardinality of Two Sets with Empty Sets
Your understanding is flawed, I’m afraid. When you count the elements of a set, you count just the elements of that set: you don’t count the elements of those elements separately, and there is nothing special about $\varnothing$ in this context. Let’s consider the set $x=\Big\{a,\{b\},\big\{c,\{d\}\big\}\Big\}$. It has $3$ elements: $a$, $\{b\}$, and $\big\{c,\{d\}\big\}$, so its cardinality is $3$. That last element happens to be a set with $2$ elements of its own, but it’s still just one member of $x$. We could replace it with the infinite set $\Bbb Z$ of all integers, getting the set $\big\{a,\{b\},\Bbb Z\big\}$, and we’d still have a set of cardinality $3$. Now the cardinality of $x$ is $3$ no matter what $a,b,c$, and $d$ are.1 In particular, it’s $3$ even if $a=b=c=d=\varnothing$, so that $x=\Big\{\varnothing,\{\varnothing\},\big\{\varnothing,\{\varnothing\}\big\}\Big\}$. It’s also $3$ if $a=b=c=d=\Bbb Z$, and $x=\Big\{\Bbb Z,\{\Bbb Z\},\big\{\Bbb Z,\{\Bbb Z\}\big\}\Big\}$. In the first case the $3$ elements of $x$ are $\varnothing$, $\{\varnothing\}$, and $\big\{\varnothing,\{\varnothing\}\big\}$; in the second they are $\Bbb Z$, $\{\Bbb Z\}$, and $\big\{\Bbb Z,\{\Bbb Z\}\big\}$. 1 That’s not quite true, but the two exceptions involve a technicality that beginners sometimes find confusing. Specifically, if $a=\{b\}$, then $$x=\Big\{a,a,\big\{c,\{d\}\big\}\Big\}=\Big\{a,\big\{c,\{d\}\big\}\Big\}$$ and has only $2$ elements. Similarly, if $a=\big\{c,\{d\}\big\}$, then $$x=\big\{a,\{b\},a\big\}=\big\{a,\{b\}\big\}$$ and again has only $2$ elements. In no case, however, does $x$ have $4$ elements.
Expected hitting time in a Markov chain
Let $y_n=E[X_n]$; then you have a linear recurrence relation with constant coefficients: $$y_{n+1}-2y_n+y_{n-1}=-2.$$ This system needs two boundary conditions. You only really have one, which is $y_0=0$. To fix that, you can artificially introduce $y_N=0$ (so that you are solving for the expected time to hit $0$ or $N$ from inside) and then send $N \to \infty$ (so that you are asymptotically guaranteed to hit $0$ first, since $N$ is being moved further and further away). To solve the finite problem, the procedure is to split into a homogeneous and a particular solution. For a particular solution, you may first guess that a constant might work, but you find you're wrong (you get $0=-2$). Next you may guess that a linear function might work, but in this special case with the probabilities being equal you are again wrong (you get $0=-2$). So you continue all the way to a quadratic function and get $$c \left ( (n+1)^2-2n^2+(n-1)^2 \right ) = c \left ( n^2 + 2n + 1 - 2n^2 + n^2 - 2n + 1 \right ) = 2c = -2$$ so $c=-1$. Thus a particular solution is $y_n=-n^2$. Next you need the general homogeneous solution. It turns out that you already found it in the course of doing the guesswork for the particular solution: the general homogeneous solution is $y_n=c_1 + c_2 n$. So you have $$y_n=c_1 + c_2 n - n^2$$ where $c_1,c_2$ are to be found. Plugging in $0$ you get $c_1=0$. Plugging in $N$ you get $c_2 N - N^2 = 0$, so $c_2=N$. Thus $$y_n=N n - n^2.$$ Now what happens when you send $N \to \infty$ for fixed $n>0$? The case $q>p$ is fairly similar. The differences are: in finding the particular solution, the linear function will actually work; and to get the homogeneous solution you will need to look for exponential solutions, i.e. the homogeneous solution will be of the form $c_1 \lambda_1^n + c_2 \lambda_2^n$.
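A quick Monte Carlo check (my own addition; $N=20$, $n=5$, and the trial count are arbitrary choices) agrees with $y_n=Nn-n^2$ for the symmetric finite problem:

```python
import random
random.seed(1)

def hit_time(n, N):
    """Steps for a symmetric +/-1 walk started at n to first hit 0 or N."""
    steps = 0
    while 0 < n < N:
        n += random.choice((-1, 1))
        steps += 1
    return steps

N, n, trials = 20, 5, 20_000
est = sum(hit_time(n, N) for _ in range(trials)) / trials
print(est, N * n - n * n)   # the estimate should be close to 75
```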
Calculate the the length of the curve $\gamma(t)=(\frac{t^{2}}{4},\frac{t^{3}}{3},\frac{t^{4}}{4})$
Being sure that you followed what Taby Mak answered: after computing $x'$, $y'$ and $z'$, you should arrive at the arc length $$L=\int_a^b \|\gamma'(t)\|\,dt=\int_a^b \sqrt{t^6+t^4+\frac{t^2}{4}}\,dt,$$ and on factoring you should notice that $$t^6+t^4+\frac{t^2}{4}=t^2 \left(t^2+\frac{1}{2}\right)^2,$$ which then makes the problem very simple.
Applying dominated convergence theorem on sum of random variables and indicator
OK: since the $X_n$ are taking values in $\mathbb{N}$, we conclude that $0<\mu<1$. The strong law of large numbers tells us that $1-\frac{Y_n}{n}\to 1-\mu$ almost surely. Also, let's take a look at the event $\{k\leq Y_n\leq n\}=\{\frac{k}{n}\leq \frac{Y_n}{n}\leq 1\}$. If $\omega$ is some point where $\frac{Y_n(\omega)}{n}\to \mu$ (by the strong law this happens at almost every $\omega$), then the condition $0<\mu<1$ implies that we eventually have $\frac{k}{n}\leq\frac{Y_n(\omega)}{n}\leq 1$, or equivalently $1_{\{k\leq Y_n\leq n\}}(\omega)=1$ for every large enough $n$. So for such $\omega$ we have $1_{\{k\leq Y_n\leq n\}}\to 1$. We conclude that $(1-\frac{Y_n}{n})1_{\{k\leq Y_n\leq n\}}\to 1-\mu$ almost surely. Also, note that for every $n$ we have $0\leq(1-\frac{Y_n}{n})1_{\{k\leq Y_n\leq n\}}\leq (1-\frac{k}{n})\leq 1$. So this allows us to use the dominated convergence theorem. We wouldn't be able to get that bound without the indicator: we indeed have $1-\frac{Y_n}{n}\to 1-\mu$ almost surely, but without the indicator we wouldn't be able to find a uniform bound on the sequence to use the DCT.
Is a prime ideal in the polynomial ring over an algebraically closed field prime also in polynomial rings over extension fields?
$K[x_1,\dotsc,x_n]/J = K \otimes_k k[x_1,\dotsc,x_n]/I$ is the tensor product of two integral domains over an algebraically closed field, hence also an integral domain (see e.g. here). So we may replace $K$ by any $k$-algebra which is an integral domain.
How can I find the eigenvalues of $\alpha$?
From (1.), we have $\lambda c=0$ and $\lambda d=0$, so $\lambda=0$ is one solution. Otherwise, $c=0$ and $d=0$, and $c=\lambda b$, so $\lambda=0$ or $b=0$. But if $c=d=b=0$, then $\lambda a=b+c=0$, so $a=0$ too, so there are no eigenvalues besides $\lambda=0$.
Nearest point to origin of a hyperplane by Lagrange multiplier
HINT: You need to solve this problem: $$ \min_{x\in D}\frac{1}{2}\|x\|^2,\qquad D=\{x\in\mathbb{R}^n~:~x^{T}c-\beta=0\}. $$ For this, define the Lagrangian by $$ L(x,\lambda)=\frac{1}{2}\|x\|^2-\lambda(x^{T}c-\beta). $$ Imposing the condition $L_{x}(x,\lambda)=0$, we have $x=\lambda c$. As $x$ must belong to $D$, we get $\lambda\|c\|^2=\beta$, i.e. $\lambda=\beta/\|c\|^2$. So $x=\frac{\beta c}{\|c\|^2}$. Remark: Check all the details that I did not do.
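A numerical sanity check (entirely my own, with arbitrary data $c$ and $\beta$) that the stationary point $x^*=\beta c/\|c\|^2$ lies on the hyperplane and is the closest feasible point to the origin:

```python
import math, random
random.seed(0)

c = [3.0, -1.0, 2.0]           # arbitrary normal vector
beta = 5.0                     # arbitrary offset
nc2 = sum(ci * ci for ci in c)
x_star = [beta * ci / nc2 for ci in c]

# The constraint x^T c = beta holds at x*.
assert abs(sum(x * ci for x, ci in zip(x_star, c)) - beta) < 1e-12

# Any other feasible point x* + d (d orthogonal to c) is at least as far out.
for _ in range(200):
    d = [random.gauss(0, 1) for _ in c]
    t = sum(di * ci for di, ci in zip(d, c)) / nc2
    d = [di - t * ci for di, ci in zip(d, c)]   # project d orthogonal to c
    y = [x + di for x, di in zip(x_star, d)]
    assert sum(v * v for v in y) >= sum(v * v for v in x_star) - 1e-12

print(math.sqrt(sum(v * v for v in x_star)))    # |beta| / ||c|| = 5/sqrt(14)
```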
How can I solve this equation: $y=2.79 - \log(1/x)$?
$$-\ln\bigg(\frac 1x\bigg)=\ln x$$ So you can simplify to: $y-\ln(x)=2.79$ Hence we get two ways to solve this: $$\text{Given $x$ find $y$: }y=2.79+\ln x$$ $$\text{Given $y$ find $x$: }x=e^{y-2.79}$$
How to find expected value of a portion of the normal distribution?
Notations: Let $F$ and $f$ be the cdf and pdf of $X$ respectively. Let $\Phi$ and $\phi$ be the cdf and pdf of a standard normal variable. Part 1: Justification First of all, can you calculate $P(a<X<b\mid X>72)$? Yes you can! It is $\displaystyle \frac{P(a<X<b \cap X>72)}{P(X>72)}$. Now you can treat $Y:=X\mid X>72$ as a random variable whose pdf (if it exists) you would like to compute. First let us calculate the cdf (say $G(a)$): $$G(a)=P(X\le a\mid X>72) =\frac{P(X\le a \cap X>72)}{P(X>72)}$$ Now make two cases. If $a\le 72$, then $\{X\le a\} \cap \{X>72\} =\emptyset$. And if $a>72$, $$P(X\le a \cap X>72)=P(72<X\le a)=P(X\le a)-P(X\le 72)=F(a)-F(72)$$ Conclusion: $$G(a)=\begin{cases}\frac{F(a)-F(72)}{P(X>72)} & a>72\\ 0 & \mbox{otherwise}\end{cases}$$ Hence this random variable $Y$ admits a pdf $g$ given by $$g(a)=\begin{cases}\frac{f(a)}{P(X>72)} & a>72\\ 0 & \mbox{otherwise}\end{cases}$$ Part 2: Calculation $$\begin{aligned} & E(X\mid X>72) =\int_{-\infty}^{\infty}xg(x)\,dx \\ & = \frac1{P(X>72)}\int_{72}^{\infty} x\frac1{2\sqrt{2\pi}}\exp\left(-\frac12\left(\frac{x-67}{2}\right)^2\right)\,dx \\ & = \frac{\int\limits_{72}^{\infty} \frac{x-67}{2}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac12\left(\frac{x-67}{2}\right)^2\right)\,dx+67\int\limits_{72}^{\infty}\frac1{2\sqrt{2\pi}}\exp\left(-\frac12\left(\frac{x-67}{2}\right)^2\right)\,dx}{P(X>72)} \\ & = \frac1{P(X>72)}\cdot\frac{2}{\sqrt{2\pi}}\int\limits_{\frac{(2.5)^2}2}^{\infty} \exp(-t)\,dt +67 \\ & =67+\frac{2\phi(2.5)}{1-\Phi(2.5)} \end{aligned}$$ In the last step the substitution $\frac12\left(\frac{x-67}{2}\right)^2=t$ was used; note that $\frac{2}{\sqrt{2\pi}}e^{-\frac{(2.5)^2}2}=2\phi(2.5)$. There may be computational mistakes, but the trick is essentially present above. Part 3: Approximation You can use the following fact: $$\lim_{x\to\infty} \frac{1-\Phi(x)}{\phi(x)/x}=1$$ Hence for large enough $x$, $1-\Phi(x) \approx \frac{\phi(x)}{x}$. Replacing this in the denominator of the last answer gives a good approximation.
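A Monte Carlo cross-check (my addition, not part of the original answer) of $E[X\mid X>72]$ for $X\sim N(67,2^2)$ against the closed form $67+2\,\phi(2.5)/\big(1-\Phi(2.5)\big)$:

```python
import math, random
random.seed(0)

mu, sigma, cut = 67.0, 2.0, 72.0
z = (cut - mu) / sigma                               # 2.5

phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)  # standard normal pdf
tail = 0.5 * math.erfc(z / math.sqrt(2))             # 1 - Phi(z)
closed_form = mu + sigma * phi / tail

kept = [x for x in (random.gauss(mu, sigma) for _ in range(1_000_000))
        if x > cut]
mc = sum(kept) / len(kept)
print(closed_form, mc)   # both approximately 72.65
```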
Riemann integration of given function
Let $P$ be some partition and $[t_{i-1}, t_{i}]$ a subinterval. There are rationals arbitrarily close to $t_i$, and, since $x \mapsto x^2$ is continuous and increasing, the $\sup$ of the image of $g:\mathbb{Q} \cap [t_{i-1}, t_{i}] \to \mathbb{R}$ given by $g(x) = x^2$ is $(t_i)^2$. On the other hand, if we consider the irrationals, the $\sup$ is $(t_i)^3 \leq (t_i)^2$. It is then clear that the upper sums for a given partition are the same as the upper sums of $f:[0,1] \to \mathbb{R}$ given by $f(x) = x^2$, and so the upper integral (i.e. $\frac 13$) is the same as well. You can use a similar argument for the lower integral to get $\frac 14$ as its value. Then conclude that the function is not Riemann integrable.
partition of convex n-gon with triangles.
The triangulated polygon can also be regarded as a planar graph $G$, and then the condition that each vertex has an odd number of triangles incident to it is equivalent to there being an even number of edges from each vertex, i.e. each vertex of $G$ has even degree. There must be vertices of degree $4$ or more, otherwise the graph is a triangle and the result holds. Now we label the vertices counterclockwise around the boundary of the triangulated polygon as $1,2,3,\cdots, n$, where the vertex labelled $1$ has degree at least $4$. So there are already edges of the form $(k,k+1)$ for $1 \le k \le n$, where the vertex labels are mod $n$ (so the edge $(n,n+1)$ is actually the edge $(n,1)$).

The simple case. We first make a (very) simplifying assumption: For each odd $k \le n-2$ there is, besides the edge $(k,k+1),$ another edge $(k,k+2).$ Note that this implies that vertex $k+1$ has degree $2$, since it is a vertex of one of the triangles in the polygon's triangulation. We will be constructing a path in the graph joining the odd numbered vertices, i.e. the path $$(1,3,5,\cdots, r)$$ where $r$ is the largest odd integer at most $n+1.$ We can always go one more step in this construction, since on arriving at a given odd vertex we have only "used up" one of the edges going to it, so there must be another going out besides the one along the boundary of the polygon. This implies also that the degree of each odd numbered vertex is at least $4.$ If $n$ is even, then $r=n+1$ and the path closes, i.e. is a cycle. On the other hand, if $n$ is odd then $r=n$ and the path does not close, and is a path in $G$ joining vertex $1$ to vertex $n$. To finish the simple case, we first take $n=2k$ even. As noted above, in this case the path $(1,3,\cdots, n)$ is actually a cycle.
We now delete from $G$ all the even numbered vertices (each of which has degree $2$) along with the edges joining them to their neighbors, forming a new graph $G'.$ We've deleted $k$ vertices, so that $G'$ has the remaining $k$ vertices. It also holds that each vertex of $G'$ has even degree: at each vertex of $G$ two edges have been removed, so the degree count at that vertex has decreased by $2$. Since, as noted, that degree was initially at least $4$, we see that each vertex of $G'$ has even degree, and also $G'$ is the triangulation of a polygon (vertices $1,3,5,...,n-1$). By induction then, $G'$ has say $k=3t$ vertices, so that $n=2\cdot 3t$ is a multiple of $3$ as desired, in this $n$ even case. Now if $n=2k+1$ is odd, the path is $1,3,5,...,n$ and after deletion of the even numbered vertices as before we have a graph $G'$ all of whose vertices besides the two ends $1,n$ of the path have even degree. Note that $k$ vertices have been deleted, and so there are now $k+1$ vertices in $G'.$ The next step is to adjoin a new vertex $N$ to $G'$ which connects to the two vertices $1$ and $n$. These are adjacent in the graph $G'$ with odd degree; with the new $N$ inserted, the resulting graph $G''$ now has all vertices with even degree, and is a triangulated polygon. It follows that $(k+1)+1=k+2=3r$ say, so that $k=3r-2$, and then since $n=2k+1$ we finally get $n=2(3r-2)+1=6r-3,$ a multiple of $3$ as desired for the case of odd $n$.

The "non simple" case. Here we have some vertex $V$ of degree at least $4$, for which there is not an edge connecting $V$ to $V+2$. We need now to be specific about the next choice. Consider a moving ray based at $V$ which initially passes through the next vertex $V+1$ and rotates toward the inside of the polygon. There will then be a first vertex $W$ it passes which is the other end of an edge from $V$.
In this "non simple" case we're assuming that there are two or more vertices located between $V$ and $W$ going around the boundary of $G$ counterclockwise. We will now split $G$ by cutting it along the line segment $VW$. This makes two subgraphs $A,B$, where $A$ is the half containing $V+1.$ For each of $A,B$ we regard the edge $(V,W)$ as one of the edges of the subgraph (and the two vertices $V,W$ are also viewed as being in each subgraph). Consider graph $A$. Each of its vertices other than $V,W$ retains its degree from $G$ and so has even degree. Vertex $V$ (as a vertex of $A$) has degree $2$ by construction. Therefore the degree of $W$ in $A$ is even also, since the number of vertices of odd degree in a graph is even. We see that $A$ satisfies the hypotheses of even degree vertices in a triangulated polygon, and also the number of vertices in $A$ is smaller than that in $G$ (the degree at $V$ being at least $4$). Conclusion: The number of vertices in $A$ is a multiple of $3,$ say $3w.$ This means the number of vertices between $V$ and $W$ along the polygon $G$ is $3w-2$, and also note here $w \ge 2$ since we have more than one vertex between $V$ and $W$ along the boundary of $G$. Now consider graph $B$. It has $n-(3w-2)$ vertices, and all of its vertices have even degree in $B$ except for vertices $V,W$, each of which has odd degree in $B$. Then (similar to the $n$ odd subcase of the "simple case" above) we can insert another vertex $N$ and edges from it to $V$ and $W$ to make a graph $B'$ which now has $n-(3w-2)+1=n-3w+3$ vertices. Note that since $w\ge 2$ we have that $B'$ has fewer vertices than does $G$, and that $B'$ satisfies the hypothesis of being a triangulated polygon with all vertices having even degree. We can then conclude that $n-3w+3$ is divisible by $3,$ and so $n$ is also a multiple of $3$. This finishes the "non simple" case. I've looked at this for some time and it seems OK to me now.
(I initially had some attempts at an answer which were not on the right track.) I'd appreciate any feedback, maybe improvements or even if someone notices a gap that needs filling.
Is annihilator of a subspace actually the space of inner products on orthogonal vectors
Well, finally I managed to prove it to be true, and it adds a rather beautiful symmetry to the theory. Formally, my statement: let $T \in L(V)$ and let $T'$ be its dual map; let also $U = \{\phi \in L(V,F):\ \phi = \langle\cdot,u\rangle\ \text{for some } u \in (Range\ T)^\perp\}$. Then $Null\ T' = (Range\ T)^0 = U$. The first part, $Null\ T' = (Range\ T)^0$, is proved in Axler's book (3.109). Then we proceed as follows: let $u \in (Range\ T)^\perp$; then $\phi = \langle\cdot,u\rangle$ equals $0$ on every vector in $Range\ T$, therefore $\phi \in (Range\ T)^0$. Hence $U \subset(Range\ T)^0$. On the other hand, if $\psi \in (Range\ T)^0$, then it equals $0$ on every vector from $Range\ T$ by definition. Then, by the Riesz Representation Theorem there exists a unique $u\in V$ such that $\psi = \langle\cdot,u\rangle$. But then this $u$ is orthogonal to every vector from $Range \ T$. Thus $(Range\ T)^0 \subset U$, which concludes the proof. An analogous statement can be proven: $Range\ T' = \{\phi \in L(V,F):\ \phi = \langle\cdot,v\rangle\ \text{for some } v \in (Null\ T)^\perp\} = (Null\ T)^0$. That all gives a really beautiful relation between the abstract theory of dual spaces and matrix theory, where for the latter another fact is usually taught: for any matrix $A$ it's true that $(Col\ A)^\perp = Null\ A^T$ and $(Col\ A^T)^\perp = Null\ A$.
The convergence of a sequence of sets
The down- and up-arrows imply the monotonicity of the sequence; the $A$ is the actual limit, which is the intersection for a non-increasing sequence and the union for a non-decreasing sequence. If $\langle A_n:n\in\Bbb N\rangle$ is monotone non-increasing, then its limit is $$A=\bigcap_{n\in\Bbb N}A_n\;,$$ and we write $$\langle A_n:n\in\Bbb N\rangle\downarrow A\;.$$ If the sequence is monotone non-decreasing, then its limit is $$A=\bigcup_{n\in\Bbb N}A_n\;,$$ and we write $$\langle A_n:n\in\Bbb N\rangle\uparrow A\;.$$
Prove: $n \in Z^{\geq 2}$, $f_nf_{n+1} - f_{n-1}f_{n+2} = (-1)^{n+1}$
Just keep using the Fibonacci recursion for "large" numbers like $n+1,$ $n+2$ and $n+3:$ $$ \begin{aligned} f_{n+1}f_{n+2}-f_nf_{n+3}&=f_{n+1}(f_{n+1}+f_n)-f_n(f_{n+2}+f_{n+1})\\ \\&=f_{n+1}(f_{n+1}+f_n)-f_n(2f_{n+1}+f_n)\\ \\&=(f_n+f_{n-1})(f_{n+1}+f_n)-f_n(2f_{n+1}+f_n)\\ \\&=f_nf_{n+1}+f_n^2+f_{n-1}f_{n+1}+f_{n-1}f_n-2f_nf_{n+1}-f_n^2\\ \\&=f_nf_{n+1}+f_{n-1}f_{n+1}+f_{n-1}f_n-2f_nf_{n+1}\\ \\&=f_{n-1}(f_n+f_{n+1})-f_nf_{n+1}\\ \\&=f_{n-1}f_{n+2}-f_nf_{n+1} \end{aligned} $$
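The identity is easy to spot-check numerically (my addition):

```python
# Verify f_n*f_{n+1} - f_{n-1}*f_{n+2} = (-1)^(n+1) for small n,
# with the convention f_0 = 0, f_1 = f_2 = 1.
fib = [0, 1, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])

for n in range(2, 30):
    assert fib[n] * fib[n + 1] - fib[n - 1] * fib[n + 2] == (-1) ** (n + 1)
print("identity verified for n = 2, ..., 29")
```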
Conditional probability with Gamma distribution.
By definition of conditional probability, $P(A|B) = P(A \ \text{and}\ B)/P(B)$.
Finding the trajectory of a charged particle in space in a magnetic field
Let's call $\omega=\frac{qB_0}m$. Then your equations of motion are: $$\begin{align}\frac{dv_x}{dt}&=0\\\frac{dv_y}{dt}&=\omega v_z\\\frac{dv_z}{dt}&=-\omega v_y\end{align}$$ Note the sign on the last equation. The initial conditions for velocity can be written as $v_x(0)=v_0$, $v_y(0)=v_0$, and $v_z(0)=0$. The first equation gives $v_x(t)=v_0$, so $x(t)=v_0t$. Take the derivative of the second equation, and use $\frac{d v_z}{dt}$ from the third to get $$\frac{d^2v_y}{dt^2}+\omega^2 v_y=0$$ The solution is $$v_y(t)=A\sin(\omega t)+B\cos(\omega t)$$ To find $A$ and $B$ you need the initial conditions. First, $$v_y(0)=B=v_0$$ Then: $$\frac{d v_y}{dt}(t)=\omega A\cos(\omega t)-v_0\omega\sin(\omega t)$$ At $t=0$ you have $$\frac{dv_y}{dt}(0)=\omega A=\omega v_z(0)=0$$ So you get $$v_y(t)=v_0\cos(\omega t)$$ and $$v_z(t)=-v_0\sin(\omega t)$$ Now integrate with respect to time to get the positions; using the initial conditions to fix the constants of integration will shift the center of the circle in the $z$ coordinate.
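As a numerical cross-check (my own addition; $\omega$ and $v_0$ are arbitrary values), integrating $\dot v_y=\omega v_z$, $\dot v_z=-\omega v_y$ with RK4 reproduces $v_y=v_0\cos\omega t$ and $v_z=-v_0\sin\omega t$:

```python
import math

omega, v0 = 2.0, 1.5          # arbitrary illustrative values
T, steps = 1.0, 10_000
dt = T / steps

def deriv(vy, vz):
    # v_y' = omega * v_z,  v_z' = -omega * v_y
    return omega * vz, -omega * vy

vy, vz = v0, 0.0              # initial conditions v_y(0) = v0, v_z(0) = 0
for _ in range(steps):
    k1y, k1z = deriv(vy, vz)
    k2y, k2z = deriv(vy + dt / 2 * k1y, vz + dt / 2 * k1z)
    k3y, k3z = deriv(vy + dt / 2 * k2y, vz + dt / 2 * k2z)
    k4y, k4z = deriv(vy + dt * k3y, vz + dt * k3z)
    vy += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    vz += dt / 6 * (k1z + 2 * k2z + 2 * k3z + k4z)

print(vy, v0 * math.cos(omega * T))     # these agree
print(vz, -v0 * math.sin(omega * T))    # and these
```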
A "generalized" exponential power series
$$\sum_{k=0}^\infty\frac{x^{k+a}}{(k+a)!}~=~e^x~\bigg[1-\dfrac{\Gamma(a,x)}{\Gamma(a)}\bigg].$$
Solve for closed form $a_j$ given $a_j = 1 + pa_{j+1} + (1-p)a_{j-1}$ where $a_1 = 0$.
The intuition of the hint is that from state $j$ you need to "move to the left" a net total of $j-1$ times to reach the absorbing state $1$, and the expected number of steps to move one unit to the left is independent of $j$ because the transition probabilities are. For an order-2 recurrence relation, you should be given two initial conditions, so I assume $a_2=1/(1-2p)$ in addition to $a_1=0$. Without being given or recognizing the hint, one approach is to use generating functions. Let $A(z)=\sum_{j=1}^\infty a_j z^j$. The recurrence relation for $j \ge 2$ implies that \begin{align} A(z) - a_1 z^1 &= \sum_{j=2}^\infty \left(1 + p a_{j+1} + q a_{j-1}\right) z^j \\ &= \sum_{j=2}^\infty z^j + \frac{p}{z} \sum_{j=2}^\infty a_{j+1} z^{j+1} + q z \sum_{j=2}^\infty a_{j-1} z^{j-1} \\ &= \frac{z^2}{1-z} + \frac{p}{z} (A(z) - a_1 z^1 - a_2 z^2) + q z A(z) \\ &= \frac{z^2}{1-z} - p a_2 z + \left(\frac{p}{z} + q z\right) A(z). \end{align} So \begin{align} A(z) &= \frac{z^2/(1-z)-p a_2 z}{1-p/z-qz} \\ &= \frac{z^3-p a_2 z^2(1-z)}{(1-z)(z-p-qz^2)} \\ &= \frac{(1-2p)z^3-p z^2(1-z)}{(1-2p)(1-z)(z-p-(1-p)z^2)} \\ &= \frac{z^2}{(1-2p)(1-z)^2} \\ &= \frac{z^2}{1-2p}\sum_{j=0}^\infty\binom{j+1}{1}z^j \\ &= \frac{1}{1-2p}\sum_{j=2}^\infty\binom{j-1}{1}z^j, \end{align} which implies that $$a_j=\frac{j-1}{1-2p}.$$
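It is easy to verify (my addition) that $a_j=(j-1)/(1-2p)$ satisfies both the recurrence and $a_1=0$, for any $p<\tfrac12$:

```python
p = 0.3                                   # arbitrary p < 1/2
a = lambda j: (j - 1) / (1 - 2 * p)

assert a(1) == 0
for j in range(2, 50):
    # a_j = 1 + p*a_{j+1} + (1-p)*a_{j-1}
    assert abs(a(j) - (1 + p * a(j + 1) + (1 - p) * a(j - 1))) < 1e-12
print("recurrence verified")
```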
Cosine of a $2 \times 2$ non-diagonalisable matrix
When you do the Jordan decomposition, you get $A = SJS^{-1}$ with $$ S = \begin{pmatrix}1&1\\1&2\end{pmatrix}\quad\text{and}\quad J = \begin{pmatrix}\pi&1\\0&\pi\end{pmatrix}. $$ You find that $$ J^n = \begin{pmatrix}\pi^n&n\pi^{n-1}\\0&\pi^n\end{pmatrix}. $$ Since $A^n = SJ^nS^{-1}$, this implies \begin{align} \cos(A) &= \sum_{n=0}^\infty \frac{(-1)^nA^{2n}}{(2n)!} = S\left(\sum_{n=0}^\infty \frac{(-1)^nJ^{2n}}{(2n)!}\right)S^{-1}\\ &= S\left(\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}\begin{pmatrix}\pi^{2n}&2n\pi^{2n-1}\\0&\pi^{2n}\end{pmatrix}\right)S^{-1}\\ &= S\begin{pmatrix}\cos(\pi)&\sum_{n=0}^\infty\frac{(-1)^n2n\pi^{2n-1}}{(2n)!}\\0&\cos(\pi)\end{pmatrix}S^{-1} = S\begin{pmatrix}-1&\sum_{n=1}^\infty\frac{(-1)^n\pi^{2n-1}}{(2n-1)!}\\0&-1\end{pmatrix}S^{-1}\\ &= S\begin{pmatrix}-1&-\sin(\pi)\\0&-1\end{pmatrix}S^{-1} = -I_2. \end{align}
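As a purely numerical double-check (my addition), summing the Taylor series of $\cos$ applied directly to $A=SJS^{-1}$ gives $-I_2$ to machine precision:

```python
import math

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[1, 1], [1, 2]]
S_inv = [[2, -1], [-1, 1]]          # inverse of S (det S = 1)
J = [[math.pi, 1], [0, math.pi]]
A = matmul(matmul(S, J), S_inv)

# Partial sums of cos(A) = sum_n (-1)^n A^(2n) / (2n)!
term = [[1.0, 0.0], [0.0, 1.0]]     # running power A^(2n), starting at A^0
A2 = matmul(A, A)
cosA = [[0.0, 0.0], [0.0, 0.0]]
sign, fact = 1.0, 1.0               # (-1)^n and (2n)!
for n in range(30):
    for i in range(2):
        for j in range(2):
            cosA[i][j] += sign * term[i][j] / fact
    term = matmul(term, A2)
    sign = -sign
    fact *= (2 * n + 1) * (2 * n + 2)

print(cosA)    # approximately [[-1, 0], [0, -1]]
```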
Do type constructors have type themselves?
To complete the previous answer: I think that in general $\to$ itself won't have a type, if we take it in a very strict sense. In the proof assistants I know, $\to$ is hard-coded in the source as something universally defined for all universes. This is done externally to the type theory, as a computation rule. However, you can define internally (here in Agda with HoTT, because that is what I am used to) a term $\verb!to!$ as a synonym for $\to$ as follows:

{-# OPTIONS --without-K --rewriting #-}
open import HoTT

module to where

to : ∀ {i j : ULevel} → Type i → Type j → Type (lmax i j)
to A B = A → B

which translates basically to $\vdash\verb!to! : \Pi_{i,j}(U_i\to U_j \to U_{\operatorname{max} i, j})$. Note that this completely eliminates the circular dependency, because the type of the (internal) $\verb!to!$ is now given via the (external) $\to$. And we always have $\verb!to!\ A\ B = A\to B$. So saying $\to$ has type $U_i\to U_j \to U_{\operatorname{max}i,j}$ is not completely wrong if you understand it as "$\to$ is completely equivalent to a term of type $U_i\to U_j \to U_{\operatorname{max}i,j}$", even if, strictly speaking, $\to$ is probably not a term in itself, and thus does not have a type.
Write the repeating decimal a = .162162162162... (where the ‘162’ continues forever) as a fraction p/q in lowest terms
Observe that $$\begin{align}0.162162162\ldots &= 0.162 + 0.000162 + 0.000000162 + \cdots\\ &= 0.162(1 + 0.001 + 0.000001 + \cdots)\\ &= 0.162\left(1 + {1\over 1000^1} + {1\over 1000^2} + \cdots\right) \\ &= 0.162\left[\sum_{n=0}^\infty\left({1\over 1000}\right)^n\right]\\ &=0.162\left({1\over 1- 1/1000}\right) \\ &={162/1000\over999/1000}\\ &= {162\over999}\\&= {6\over37}.\end{align}$$
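Python's `fractions` module confirms the reduction (my addition):

```python
from fractions import Fraction

a = Fraction(162, 999)   # 0.162162... = 162/999, reduced automatically
print(a)                 # 6/37
print(float(a))          # 0.162162...
```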
Sequence of random variable converges to zero but expectation converges to infinity
Let $X_n(t)=n^{2}I_{(0,\frac 1 n)}$ on the space $(0,1)$ with Lebesgue measure. Then $X_n(t) \to 0$ for every $t$ and $EX_n=n$.
problem similar to random walk problems
This is a non-symmetric, 1-dimensional random walk on $\mathbb{Z}$. First notice that $a$ is the probability of ever reaching $k+1$ when starting from $k$. Starting from $0$ you can reach $1$ in two distinct ways. First, you can go straight to $1$; this has probability $p$. Or, you can first go to $-1$ (with probability $1-p$), then eventually reach $0$ (with probability $a$; see above with $k=-1$), and then, starting in $0$, eventually reach $1$ (again with probability $a$). As the steps are independent you may multiply the probabilities and write $$ a=p+(1-p)\,a^2. $$
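The quadratic $(1-p)a^2-a+p=0$ factors as $(a-1)\big((1-p)a-p\big)=0$, so its roots are $a=1$ and $a=p/(1-p)$; a quick numerical check (my addition, with an arbitrary $p$):

```python
p = 0.4                                   # arbitrary up-step probability
disc = 1 - 4 * (1 - p) * p                # discriminant of (1-p)a^2 - a + p
roots = sorted((1 + s * disc ** 0.5) / (2 * (1 - p)) for s in (1, -1))
print(roots)                              # [p/(1-p), 1] = [0.666..., 1.0]

# Both roots satisfy the fixed-point equation a = p + (1-p) a^2.
for a in roots:
    assert abs(a - (p + (1 - p) * a * a)) < 1e-12
```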
A smooth nonzero function $\mathbb R\to\mathbb R$ with uniformly bounded derivatives tending to zero at infinity?
It turns out that there are a lot of these functions. To see this, one possibility is to use the theory of the Fourier transform: Choose some $\varphi\in C_{c}^{\infty}\left(\left(-\frac{1}{2\pi},\frac{1}{2\pi}\right)\right)$ and let $f:=\mathcal{F}\varphi=\widehat{\varphi}$ be the Fourier transform of $\varphi$, i.e. $$ f\left(\xi\right)=\int_{\mathbb{R}}\varphi\left(x\right)\cdot e^{-2\pi ix\xi}\,{\rm d}x. $$ By the usual properties of the Fourier transform, we conclude that $f\in\mathcal{S}\left(\mathbb{R}\right)$ is a Schwartz function, which means $$ C_{m,n}:=\sup_{x\in\mathbb{R}}\left|x^{m}\cdot\partial^{n}f\left(x\right)\right|<\infty $$ for all $m,n\in\mathbb{N}_{0}$. In particular, this implies (for $\left|x\right|\geq1$): $$ \left|\partial^{n}f\left(x\right)\right|=\left|\frac{x\cdot\partial^{n}f\left(x\right)}{x}\right|\leq\frac{C_{1,n}}{\left|x\right|}\xrightarrow[x\to\pm\infty]{}0 $$ so that $f$ fulfills the second of your requirements. By choosing (e.g.) $\varphi\geq0$ and $\varphi\not\equiv0$, we can also ensure that $f\left(0\right)=\int_{\mathbb{R}}\varphi\left(x\right)\,{\rm d}x>0$, i.e. $f\not\equiv0$ (alternatively, we have $\varphi=\mathcal{F}^{-1}f$, so that $f\not\equiv0$ holds as soon as $\varphi\not\equiv0$ is true).
Finally, we can differentiate "under the integral sign" to get \begin{eqnarray*} \left|\partial^{n}f\left(\xi\right)\right| & = & \left|\int_{\mathbb{R}}\varphi\left(x\right)\cdot\frac{{\rm d}^{n}}{{\rm d}\xi^{n}}e^{-2\pi ix\xi}\,{\rm d}x\right|\\ & = & \left|\int_{\mathbb{R}}\varphi\left(x\right)\cdot\left(-2\pi ix\right)^{n}\cdot e^{-2\pi ix\xi}\,{\rm d}x\right|\\ & \leq & \int_{\mathbb{R}}\left|\varphi\left(x\right)\right|\cdot\left|2\pi x\right|^{n}\,{\rm d}x\\ & \overset{{\rm supp}\left(\varphi\right)\subset\left(-\frac{1}{2\pi},\frac{1}{2\pi}\right)}{\leq} & \int_{-\frac{1}{2\pi}}^{\frac{1}{2\pi}}\left|\varphi\left(x\right)\right|\cdot\left|2\pi x\right|^{n}\,{\rm d}x\\ & \leq & \int_{-\frac{1}{2\pi}}^{\frac{1}{2\pi}}\left|\varphi\left(x\right)\right|\,{\rm d}x\leq\left\Vert \varphi\right\Vert _{L^{1}}, \end{eqnarray*} so that the derivatives of $f$ are also uniformly bounded, which is the first of your desired properties.
If a matrix is not invertible, then is it that $N(A) \neq \varnothing$, or $N(A) \neq \{0\}$
You caught an error there. The null space is a subspace, so it always contains $\vec0$ and thus is never empty; the condition characterizing non-invertibility is $N(A)\neq\{0\}$.
Find all $x$ such that $|4x^2 - 12x - 27|$ is prime
Hint: $|4x^2 - 12x - 27| = |(2x+3) \ (2x-9)|$ For it to be a prime number, one of the factors has to be $\pm1$ and absolute value of the other factor has to be the prime number. So find $x$ when, $2x+3 = \pm1$ or $2x - 9 = \pm1$ and check the value of the other factor. There are only $4$ possible values of $x$ to check.
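Checking the four candidate values (my addition):

```python
def is_prime(n):
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

f = lambda x: abs(4 * x * x - 12 * x - 27)

# 2x+3 = +/-1  ->  x = -1, -2 ;   2x-9 = +/-1  ->  x = 5, 4
for x in (-1, -2, 5, 4):
    print(x, f(x), is_prime(f(x)))   # all four give a prime (11 or 13)
```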
Properties of differential equation without solving the equation
$$2f(x)\sin 2x+f'(x)\cos2x=1$$ Differentiate with respect to $x$: $$ 2f'(x)\sin 2x + 4f(x)\cos 2x+f''(x)\cos 2x -2f'(x)\sin2x=0\to\\ 4f(x)\cos 2x + f''(x)\cos 2x=0\to\\ (f''(x)+4f(x))\cos 2x =0 $$ We already knew from the initial equation that $f''(x)+4f(x)=0$. Also, let us evaluate both sides at $x=0$: $$ 2f(0)\sin0+f'(0)\cos0=1\to\\ 1=1 $$ We have shown that the derivatives of the left and right sides agree for all $x\in\mathbb{R}$, and that the two sides agree at one point; hence they are equal for all $x \in \mathbb{R}$.
Help with notation (primality testing and factorization course)
$\Bbb F_p$ refers to the field of $p$ elements, where $p$ is some prime. Up to isomorphism there is only one such field, which is $\Bbb Z_p$. I.e., it can be identified with the set $\{0, 1,..., p-1\}$ with addition and multiplication modulo $p$. $\Bbb F_q$ is of course the same, except the prime defining the field is labeled $q$ instead of $p$. In $\Bbb F[x]$, $\Bbb F$ represents some arbitrary field. For instance $\Bbb F$ could be $\Bbb R, \Bbb Q, \Bbb C, \Bbb F_2$, etc. $\Bbb F[x]$ is then the ring of all polynomials in the variable $x$ with coefficients in $\Bbb F$. Note that $\Bbb F[x]$ is not a field, as only the non-zero constants have inverses. However, the notation $\Bbb F(x)$ denotes the set of rational functions in $x$ over $\Bbb F$, which is a field.
An example related to the Monotone Convergence Theorem
The monotone convergence theorem requires $f_1(x) \leq f_2(x) \leq \cdots$ for all $x$. But $$f_1(x) = \begin{cases} 1 & \text{if } x \in [0, 1], \\ 0 & \text{otherwise,} \end{cases}$$ and $$f_2(x) = \begin{cases}\frac12 & \text{if } x \in [0, 2], \\ 0 & \text{otherwise.}\end{cases}$$ Notice that $f_1(0) = 1 \not\leq \frac12 = f_2(0)$. So the hypothesis fails.
Proof verification - working with integers
Taking it from where you left off: $a^2 + 1 = ka \implies a^2-ka + 1 = 0$. For $a$ to be an integer, the discriminant must be a perfect square: $\triangle = b^2-4ac = (-k)^2 - 4(1)(1) = k^2-4 = m^2$ for some $m \in \mathbb{Z}$, which gives $(k-m)(k+m) = 4$. Can you take it from here?
How to calculate$\sum\limits_{n=0}^{\infty}q^n\cos{(nθ)}$, $q\in\Bbb C$
You don't need to do all this to check whether the series is convergent or divergent; there is an easier method. $|\cos(n\theta)|\le 1$ for all $\theta \in \mathbb R$, and as $|q|<1$, the geometric series $\sum_n |q|^n$ converges. Use the comparison test. Hope this helps.
Prove that there exist constants $u,v$ such that $uA+vB$ is positive definite.
It isn't true. Counterexample: $$ A=\pmatrix{1&amp;0\\ 0&amp;-1},\ B=\pmatrix{0&amp;1\\ 1&amp;0}. $$ Clearly, $x^TAx=0$ if and only if $x=(t,\pm t)^T$, and $x^TBx=0$ if and only if $x=(t,0)^T$ or $(0,t)^T$. So, the only solution to $x^TAx=x^TBx=0$ is the zero vector. However, $uA+vB$ is never positive definite because it has a zero trace.
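A quick numerical confirmation of the zero-trace argument (a sketch using numpy): since $\operatorname{tr}(uA+vB)=0$, the two real eigenvalues sum to zero, so the smallest one is never positive.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, -1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

rng = np.random.default_rng(0)
ever_pd = False
for _ in range(1000):
    u, v = rng.normal(size=2)
    M = u * A + v * B                       # symmetric, trace(M) = 0
    if np.linalg.eigvalsh(M).min() > 1e-12:  # positive definite?
        ever_pd = True
print(ever_pd)  # -> False
```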
Why is the empty set a subset of every set?
Because every single element of $\emptyset$ is also an element of $X$. Or can you name an element of $\emptyset$ that is not an element of $X$?
Evaluate the integral $\int_c(3x-y)ds$
For part two I would use a parametrisation where $t$ starts at $0$ and goes to $\pi/2$, because we're travelling along a quarter of the edge of the circle. Then the parametrisation is given by $x(t) = 3\sin(t+\pi/4), y(t) = 3\cos(t+\pi/4)$. Hence $x'(t) = 3\cos(t+\pi/4)$ and $y'(t) = -3\sin(t+\pi/4)$. This segment of the curve can now be calculated by: \begin{align*} &amp;\int_0^{\frac{\pi}{2}}\big(3x(t)-y(t)\big)\sqrt{x'(t)^2+y'(t)^2}\ dt\\ =&amp; \int_0^{\frac{\pi}{2}}\big(9\sin(t+\pi/4)-3\cos(t+\pi/4)\big)\sqrt{(3\cos(t+\pi/4))^2+(-3\sin(t+\pi/4))^2}\ dt\\ =&amp; \int_0^{\frac{\pi}{2}}\big(9\sin(t+\pi/4)-3\cos(t+\pi/4)\big)\sqrt{9\cos^2(t+\pi/4)+9\sin^2(t+\pi/4)}\ dt\\ =&amp; \int_0^{\frac{\pi}{2}}\big(9\sin(t+\pi/4)-3\cos(t+\pi/4)\big)3\ dt\\ =&amp;\ 27\int_0^{\frac{\pi}{2}}\sin(t+\pi/4)\ dt-9\int_0^{\frac{\pi}{2}}\cos(t+\pi/4)\ dt\\ =&amp;\ 27\big[-\cos(t+\pi/4)\big]_{0}^{\pi/2}-9\big[\sin(t+\pi/4)\big]_{0}^{\pi/2}\\ =&amp;\ 27\cdot\sqrt{2}-9\cdot 0\\ =&amp;\ 27\sqrt{2} \end{align*} You can do the rest, combining it with the other integral and so on. Good luck!
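A quick numerical sanity check of this segment with the composite trapezoid rule (the parametrisation is the one above; the exact value of this quarter-circle piece is $27\sqrt 2$):

```python
import math

def integrand(t):
    # (3x - y) * |r'(t)| along x = 3 sin(t + pi/4), y = 3 cos(t + pi/4)
    x = 3 * math.sin(t + math.pi / 4)
    y = 3 * math.cos(t + math.pi / 4)
    dx = 3 * math.cos(t + math.pi / 4)
    dy = -3 * math.sin(t + math.pi / 4)
    return (3 * x - y) * math.hypot(dx, dy)

N = 100000
h = (math.pi / 2) / N
total = 0.5 * (integrand(0) + integrand(math.pi / 2))
total += sum(integrand(i * h) for i in range(1, N))
total *= h
print(abs(total - 27 * math.sqrt(2)) < 1e-6)  # -> True
```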
Modelling random points on a circle
I see a fundamental problem with this approach. It is true that there are really only two sequences of three points on the circle: $A,B,C$ are encountered in that sequence either clockwise or counterclockwise. But when you assign numerical coordinates to the three points via the bijection between the circle and $[0,1)$, and then start dealing with arithmetic operations and relations among the three numbers, you change the topology of the problem. Here is an example. In one case you could have $A = 0.4,$ $B = 0.6.$ In another case you could have $A = 0.9,$ $B = 0.1.$ The two cases are isometric on the circle: $B$ is $0.2$ units counterclockwise from $A$. But numerically they are very different: $B - A = 0.2$ and $A &lt; B$ in one case, $B - A = -0.8$ and $B &lt; A$ in the other case. Perhaps the most obvious contradiction is, if $A,B,C$ truly are iid uniform on $[0,1)$, then there is a non-zero probability that $A &lt; C &lt; B,$ and likewise for three other cases that you have ruled out in the question. It is possible to exclude the other four cases from consideration, but to do so you have to give up the iid assumption and restrict the support of $(A,B,C)$ to a subset of the cube $[0,1)^3.$ The distribution of $(A,B,C)$ is then uniform on that subset. Then to compute probabilities correctly you have to understand the shape of that subset of the cube. I think this makes the problem much harder than it needs to be.
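The point that iid uniform coordinates give positive probability to the "excluded" orderings is easy to see by simulation (a small sketch; for iid uniforms each of the six orderings occurs with probability $1/6$):

```python
import random

random.seed(1)
N = 100_000
hits = 0
for _ in range(N):
    a = random.random()
    b = random.random()
    c = random.random()
    if a < c < b:          # one of the orderings ruled out in the question
        hits += 1
print(hits / N)  # close to 1/6, so the event has positive probability
```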
Differentiation to Find slope if tangent line Implicitly
When the derivative is taken properly you should get $y'=(4y-3x^2)/(3y^2-4x)$. Differentiating implicitly gives $3x^2+3y^2y'-4xy'-4y=0$. Isolating $y'$ you get $y'(3y^2-4x)=4y-3x^2$; divide both sides by $3y^2-4x$ and you get the proper derivative.
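The original curve isn't quoted in the answer, but $x^3+y^3=4xy$ is the one consistent with the derivative shown; under that assumption, a sympy check:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# Assumed curve (not quoted in the answer): x^3 + y^3 = 4 x y
implicit = x ** 3 + y ** 3 - 4 * x * y
dydx = sp.solve(sp.diff(implicit, x), sp.diff(y, x))[0]

# Should match (4y - 3x^2) / (3y^2 - 4x):
print(sp.simplify(dydx - (4 * y - 3 * x ** 2) / (3 * y ** 2 - 4 * x)))  # -> 0
```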
Prove that limit of function does not exist, if and only if sequence $f(s_n)$ is not convergent.
You have some fixed sequence $(s_n)$ such that $s_n\to c$. Then take any other sequence $(t_n)$ such that $t_n\to c$. By negating $(b)$ we know that $(f(s_n))$ and $(f(t_n))$ are both convergent, but the critical part is to notice that you don't know a priori that the limits are the same. If you knew that the limits are the same, you would prove that $f$ has a limit at $c$. To prove that $(f(s_n))$ and $(f(t_n))$ converge to the same limit, you consider $(u_n)$ defined as in the text. Obviously, $u_n\to c$. What you now know is that $(f(u_n))$, $(f(s_n))$ and $(f(t_n))$ are all convergent and that $(f(s_n))$ and $(f(t_n))$ are subsequences of $(f(u_n))$. Finally, any subsequence of a convergent sequence is convergent with the same limit. So, $$\lim_{n\to\infty}f(t_n)=\lim_{n\to\infty}f(u_n)=\lim_{n\to\infty}f(s_n).$$ Since $(t_n)$ was arbitrary, it means that for all sequences $t_n\to c$, $f(t_n)\to L$, where $L = \lim_nf(s_n)$. Therefore, $f$ has a limit at $c$.
Which definition is correct for a geometric random variable?
I would say both are correct; just be consistent and use only one. However, I find the number-of-trials definition more appealing, since $E[X]$ is simply $1/p$ in that case. PS: if you are given both $p = 0.25$ and $E[X] = 3$, then you can check which definition is used: under the number-of-trials definition you would get $$E[X] = 1/p = 1/0.25 = 4 \neq 3\text{,}$$ hence the number-of-failures definition (for which $E[X] = (1-p)/p = 3$) was used.
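A quick numerical comparison of the two conventions, summing each pmf directly (truncated far into the tail, where the remaining mass is negligible):

```python
p = 0.25

# Number-of-trials definition: P(X = k) = (1-p)^(k-1) * p, k >= 1
E_trials = sum(k * (1 - p) ** (k - 1) * p for k in range(1, 2000))

# Number-of-failures definition: P(X = k) = (1-p)^k * p, k >= 0
E_failures = sum(k * (1 - p) ** k * p for k in range(0, 2000))

print(round(E_trials, 6), round(E_failures, 6))  # -> 4.0 3.0
```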
Notation of graph with vertices & edges which belong to classes?
We usually say that the vertices are of 3 types, and describe them. The same with the edges. There is no reason to use a $V_{\{a,b,c\}}$ sort of notation, you just define $V$ as consisting of three types of objects.
How to identify continuity or discontinuity of an [Definite] integral?
To determine whether the integral exists you need to treat it as limits around the discontinuities of the integrand. The first one: $$\int_{-4}^4{dx\over x}$$ exists iff both limits $$\lim_{t\to 0^-}\int_{-4}^t{dx\over x},\quad \lim_{s\to 0^+}\int_s^4{dx\over x}$$ exist. But just check out the second one; the FTC gives $$\lim_{s\to 0^+}(\log 4-\log s)=\infty.$$ Similarly for the second integral you need both limits $$\lim_{t\to 0^+}\int_t^b{\ln x\over x}\,dx,\quad \lim_{s\to \infty}\int_b^s{\ln x\over x}\,dx$$ to exist, but doing the first one with the substitution $u=\ln x\implies du={dx\over x}$ we are computing $$\lim_{t\to 0^+}\int_{\log t}^{\log b}u\,du ={1\over 2}\log^2 b-\lim_{t\to 0^+}{1\over 2}\log^2t=-\infty,$$ so it also diverges. You could as easily check that the second piece diverges too; the same $u$-substitution gives $$\lim_{s\to \infty}{1\over 2}\left(\log^2 s-\log^2 b\right)=\infty,$$ but only one of the limits needs to fail to exist to guarantee the whole integral fails to exist. Just by way of contrast, let's see that a discontinuity need not stop us from integrating. Take $$\int_0^1{dx\over\sqrt{x}}=\lim_{t\to 0^+}\int_t^1{dx\over \sqrt x};$$ using the FTC we see this is $$\lim_{t\to 0^+}\left(2\sqrt{1}-2\sqrt t\right)=2.$$
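Numerically, the antiderivatives above make the contrast visible (a sketch using the closed forms from the computation):

```python
import math

# Divergent piece: integral of dx/x from s to 4 equals log 4 - log s,
# which grows without bound as s -> 0+
divergent = [math.log(4) - math.log(s) for s in (1e-2, 1e-4, 1e-8)]
print(divergent)

# Convergent contrast: integral of dx/sqrt(x) from t to 1 equals
# 2 - 2 sqrt(t), which approaches 2 as t -> 0+
convergent = [2 - 2 * math.sqrt(t) for t in (1e-2, 1e-4, 1e-8)]
print(convergent)
```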
Basis for this $\mathbb{P}_3$ subspace.
Another way to get to the answer: $P_3=\{ax^3+bx^2+cx+d : a,b,c,d\in{\bf R}\}$. For $p(x)=ax^3+bx^2+cx+d$ in $P_3$, the condition $p(1)=0$ is $$a+b+c+d=0$$ So, you have a "system" of one linear equation in 4 unknowns. Presumably, you have learned how to find a basis for the vector space of all solutions to such a system, or, to put it another way, a basis for the nullspace of the matrix $$\pmatrix{1&amp;1&amp;1&amp;1\cr}$$ One such basis is $$\{(1,-1,0,0),(1,0,-1,0),(1,0,0,-1)\}$$ which corresponds to the answer $$\{x^3-x^2,x^3-x,x^3-1\},$$ one of infinitely many correct answers to the question.
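A quick numpy check of this basis (coefficient vectors in the order $(a,b,c,d)$): each vector lies in the nullspace of $(1\ 1\ 1\ 1)$, i.e. each polynomial vanishes at $x=1$, and the three vectors are independent.

```python
import numpy as np

# Coefficient vectors for x^3 - x^2, x^3 - x, x^3 - 1:
basis = np.array([[1, -1, 0, 0],
                  [1, 0, -1, 0],
                  [1, 0, 0, -1]], dtype=float)

print(basis @ np.ones(4))            # -> [0. 0. 0.]  (each p satisfies p(1) = 0)
print(np.linalg.matrix_rank(basis))  # -> 3           (linearly independent)
```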
If $A$ is densely defined symmetric operator, $\lambda-A$ is onto $X \implies \lambda \in \rho(A)$
One does not need closedness. Let $z=x+iy$, then we compute $$ \Vert (A-z)\phi \Vert^2 = \Vert (A-x)\phi\Vert^2 + y^2 \Vert \phi \Vert^2 + \langle (A-x)\phi , -iy \phi \rangle + \langle -i y \phi , (A-x) \phi \rangle $$ Now use the fact that $A$ is symmetric to show the last two terms cancel. Then you end up with $$ \Vert (A-z)\phi \Vert^2 \geq y^2 \Vert \phi \Vert^2.$$ Thus, if $\psi = (A-z)\phi$ you have $$ \Vert (A-z)^{-1}\psi \Vert \leq \frac{1}{\vert y \vert} \Vert \psi \Vert.$$ I.e. $(A-z)^{-1}$ is bounded.
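A finite-dimensional sketch of the same bound: for a real symmetric matrix $A$ (standing in for the symmetric operator) and $z=x+iy$ with $y\neq 0$, one should observe $\Vert (A-z)\phi \Vert \geq \vert y\vert\, \Vert \phi \Vert$ for every vector $\phi$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n))
A = M + M.T                  # real symmetric matrix, finite-dimensional stand-in
z = 0.7 + 1.3j               # z = x + i y with y != 0

ok = True
for _ in range(1000):
    phi = rng.normal(size=n) + 1j * rng.normal(size=n)
    lhs = np.linalg.norm((A - z * np.eye(n)) @ phi)
    # The bound ||(A - z) phi|| >= |Im z| * ||phi||, up to float tolerance:
    if lhs < abs(z.imag) * np.linalg.norm(phi) - 1e-9:
        ok = False
print(ok)  # -> True
```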
When does a prime $p=m^2+n^2$ divide the term $ m^3+n^3-4$?
Sorry for waking up an old post... Anyway, I think the following works. Since $p\mid m(m^2+n^2)$ and $p\mid n(m^2+n^2)$ we have $p|3(m^3+m^2n+n^2m+n^3)$ Since \begin{align}3(m^3+m^2n+n^2m+n^3)&amp;=(m+n)^3+2(m^3+n^3)\\&amp;=(m+n)^3+8+ 2(m^3+n^3-4)\end{align} we obtain $p|(m+n)^3+8$, while $$(m+n)^3+8=(m+n+2)(m^2+n^2+2mn-2m-2n+4)$$ Since $p$ is prime, it divides one of the factors. The case $p|(m+n+2)$ is easily dealt with and gives $p=2,5$. The latter case reduces to $p|2(mn-m-n+2)$, and to $p|(mn-m-n+2)$ if $p\neq 2$. Now, when $mn-m-n+2\neq 0$ this forces $p=m^2+n^2\leq |mn-m-n+2|$, while $m^2+n^2\geq 2|mn|$, so a simple comparison leaves only finitely many cases and gives the single possibility $m=-2,n=-3,p=13$. Hence the answer $p=2,5,13$.
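A brute-force search over a modest range supports the answer $p=2,5,13$ (a sketch; the search window of $|m|,|n|\le 60$ is an arbitrary choice):

```python
def is_prime(n):
    # Trial division, fine for the small numbers in this search
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

found = set()
R = 60
for m in range(-R, R + 1):
    for n in range(-R, R + 1):
        p = m * m + n * n
        if is_prime(p) and (m ** 3 + n ** 3 - 4) % p == 0:
            found.add(p)
print(sorted(found))  # -> [2, 5, 13]
```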
circle cuts $y=x^3-25x$ in six points
The circle is $$(x-a)^2+(y-b)^2-r^2=0.$$ Substituting $x^3-25x$ for $y$ gives a monic sextic in $x$. The sum of its roots will be the negative of its $x^5$-coefficient. What is that coefficient?
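For the curious, a sympy sketch of that coefficient (the sextic is monic, so the sum of the six $x$-coordinates is minus the $x^5$-coefficient):

```python
import sympy as sp

x, a, b, r = sp.symbols('x a b r')

# Substitute y = x^3 - 25x into the circle equation and expand:
sextic = sp.expand((x - a) ** 2 + (x ** 3 - 25 * x - b) ** 2 - r ** 2)

c5 = sp.Poly(sextic, x).coeff_monomial(x ** 5)
print(c5)  # -> 0
```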
Confused between normal and binomial dist.
You can calculate the probability that (1) a fish weighs more than $1.4$kg, or (2) a fish weighs $1.4$kg or less, using the normal distribution. If fish weights are independent, you can then use the binomial distribution to work out the probability that (1) $4$, $5$ or $6$ fish weigh more than $1.4$kg, or (2) $0$ or $1$ fish weigh $1.4$kg or less. Note that the $\le$ in (b) should be $\lt$.
How to solve this definite integral from fourier series?
Hint: $$\sin \left(\frac{m \pi x}{L}\right) \sin \left(\frac{n \pi x}{L}\right) = \frac 12 \left[\cos \left(\frac{(m-n) \pi x}{L}\right)-\cos \left(\frac{(m+n) \pi x}{L}\right)\right] $$
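The product-to-sum identity $\sin a \sin b = \frac12[\cos(a-b)-\cos(a+b)]$ behind this hint is easy to verify symbolically (a quick sympy check):

```python
import sympy as sp

a, b = sp.symbols('a b')
lhs = sp.sin(a) * sp.sin(b)
rhs = (sp.cos(a - b) - sp.cos(a + b)) / 2
print(sp.simplify(lhs - rhs))  # -> 0
```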