Is the Converse of the Cayley-Hamilton Theorem true or not?
No. If $p_F$ is the characteristic polynomial of $F$ and $q$ is any other polynomial, let $P = p_F q$; then $$P(F) = p_F(F) q(F) = 0 \cdot q(F) = 0.$$ So $P$ is a polynomial with $P(F) = 0$ that is not the characteristic polynomial of $F$ (just take $q$ to be nonconstant). We can also have something like the following happen: let $$F = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$ Then $$p_F(t) = (1 - t)^2.$$ But of course if $$q(t) = 1 - t,$$ we have $$q(F) = 0.$$
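As a quick numeric illustration of the second example (a sketch, not part of the original answer): with $F$ the $2\times 2$ identity, both $q(F)$ and $p_F(F)$ vanish even though $q$ has degree $1$.

    import numpy as np

    F = np.eye(2)
    q_of_F = np.eye(2) - F                        # q(t) = 1 - t evaluated at F
    p_of_F = (np.eye(2) - F) @ (np.eye(2) - F)    # p_F(t) = (1 - t)^2 evaluated at F
    print(np.allclose(q_of_F, 0), np.allclose(p_of_F, 0))  # True True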
Negation of countable union of nowhere dense.
The second one is correct. To show that a space is of second category you have to show that whenever the space is expressed as a countable union of sets, the interior of the closure of at least one of them is non-empty. The first one is false.
Harnack's inequality : In what it mean that all value are comparable?
Evans describes in what sense this can be understood: $u$ cannot be very small (or very large) at any point of $V$ unless $u$ is very small (or very large) everywhere. It is crucial that $C$ depends on $V$ only, not on $u$: a region of a certain regularity hosts only harmonic functions whose values are comparable in this sense. The inequality can obviously only be stated for nonnegative functions; you would run into problems if e.g. $\inf u < 0$ and $\sup u > 0$. However, for a (bounded) harmonic $u$ you can always look at some $\tilde{u} = u + C > 0$, use Harnack's inequality on $\tilde{u}$, and hope that this is useful for whatever you want to do with your original $u$.
rank of powers of a linear map $f:V\rightarrow V$ of a vector space $V$
First, you know that $\operatorname{rank}(f^{3n})=\operatorname{rank}(f)$ for all $n$. For any $n\geq k$, we have $\operatorname{rank}(f^n)\leq \operatorname{rank}(f^k)$. Then you have a chain of inequalities: $$ \operatorname{rank}(f)= \operatorname{rank}(f^{3n+3})\leq \operatorname{rank}(f^{3n+2})\leq\operatorname{rank}(f^{3n+1})\leq\operatorname{rank}(f^{3n})=\operatorname{rank}(f), $$ so all the ranks in between are equal.
An example where $f,g$ are Riemann integrable on $[a,b]$, the range of $f$ is $[a,b]$, but $g\circ f$ is not Riemann integrable.
The classic example, as you have stated, involves a Riemann function and a jump discontinuity at zero. To preserve these key features, we could define $f:[0,1]\to[0,1]$ by $$ f(x)=\begin{cases}0 & x=0 \\ 1-x & 0<x<1 \\ 1 & x=1\end{cases} $$ This is a bijection which still causes $f\circ g$ to behave like the Dirichlet function. The discontinuity (and thus non-monotonicity) is necessary, since the composition with a continuous function will always be Riemann integrable.
Why does $\log_{4}32 \neq \log _{4}(4 \cdot 8)$
Because $\log_4(8) \neq 2$. Instead, $4^2=16$, so $\log_4 16 = 2$, while $\log_4 8 = \frac32$.
solving a trigonometric equation Cos(x+30°)=cos(x+60°)
Hint: Notice that (and we can see it on the unit trigonometric circle) $$\cos a=\cos b\iff a=b+2k\pi\quad\text{or}\quad a=-b+2k\pi,\quad k\in\Bbb Z$$ Can you take it from here?
Why does this covariance matrix have additional symmetry along the anti-diagonals?
You've found a typo. (Good eye!) Note, however, that every covariance matrix is symmetric, including, of course, the corrected version of the one you reference. This follows simply from the fact that the covariance matrix is defined as $$ \mathbb E\big((\mathbf X - \mu)(\mathbf X - \mu)^T\big) $$ for a random vector $\mathbf X \in \mathbb R^n$ with mean vector $\mu$ which satisfies the necessary moment conditions. In this particular case, the text should have read $$ \mathbf C(\mathbf z) = \left(\begin{matrix}1 & a \\ a & a^2 + \sigma_n^2\end{matrix}\right) \> . \tag{4.46} $$
Generalization of Radon-Nikodym Theorem
Recall that $|\mu|$, the total variation of $\mu$, is a finite measure, $\mu \ll |\mu|$, and $$\left|\frac{d\mu}{d|\mu|}\right| = 1 \text{ (i.e. invertible in $\mathbb{C}$, $|\mu|$-a.e.)},$$ and $$\int_A f \,d\mu = \int_A f \frac{d\mu}{d|\mu|}\,d|\mu|$$ holds when $f$ is $|\mu|$-integrable. Now let $$g=\frac{d\nu}{d|\mu|}\left(\frac{d\mu}{d|\mu|}\right)^{-1}$$ and integrate $g$. We see that $$\int_A g\,d\mu = \int_A \frac{d\nu}{d|\mu|}\left(\frac{d\mu}{d|\mu|}\right)^{-1}\frac{d\mu}{d|\mu|}\,d|\mu| = \nu(A),$$ so $g$ is the Radon-Nikodym derivative. Note that $g$ is unique up to $|\mu|$-null sets: if $g$ and $h$ are both Radon-Nikodym derivatives, then $\int_A (g-h)\frac{d\mu}{d|\mu|}\,d|\mu| = 0$ for all $A$, hence $(g-h)\frac{d\mu}{d|\mu|} = 0$, so $g-h=0$, $|\mu|$-a.e.
Analysis.. Convergence of sequence
1) Recall that if $f(n)=p(n)/q(n)$ is a rational function with numerator and denominator of the same degree, then the limit at $+\infty$ exists and is the quotient of the leading coefficients. So $$ \lim_{n\rightarrow+\infty}\frac{7n+3}{2n+1}=\frac{7}{2}\quad\mbox{and}\quad\lim_{n\rightarrow+\infty}\frac{n+1}{n}=1. $$ Now by continuity of the square root at $1$, the square root of the latter tends to $\sqrt{1}=1$. So $x_n$ converges coordinatewise, which is equivalent to convergence in finite dimension. Hence $$ \lim_{n\rightarrow+\infty}x_n=\left(\frac{7}{2},1\right). $$ 2) The second coordinate is $(-1)^n+7/n$. The subsequence of even indices is $1+7/(2n)$ and converges to $1$, while the subsequence of odd indices is $-1+7/(2n+1)$ and converges to $-1$. So this sequence does not converge. A fortiori $x_n$ does not converge. 3) This sequence of functions converges uniformly to $$ x(t)=t $$ on $[0,1]$. To see this, compute $x_n(t)-t$ and simplify. You will find: $$ x_n(t)-t=\log\left(1+\frac{t}{n}\right)-\frac{t}{1+nt}. $$ Now $$ |x_n(t)-t|\leq \left|\log\left(1+\frac{t}{n}\right)\right|+\frac{t}{1+nt}\leq\frac{t}{n}+\frac{t}{nt}\leq \frac{2}{n} $$ for all $t\in(0,1]$ first, and then of course for $t=0$. So $$ \|x_n-x\|_\infty\leq \frac{2}{n}. $$ Since the rhs tends to $0$, the squeeze theorem now shows that $x_n$ converges uniformly to $x$ on $[0,1]$.
Fiber of associated G-bundle
This follows from the fact that the action of $G$ is free. Let $p:P\rightarrow M$ be the projection of the principal bundle, and denote by $p_F:P_F\rightarrow M$ the projection of the associated bundle, that is, of the quotient of $P\times F$. Let $m\in M$ and $x\in p^{-1}(m)$. For $y,y'\in F$, denote by $[x,y], [x,y']$ the images of $(x,y)$ and $(x,y')$ in $(P\times F)/\simeq$. We have $[x,y]=[x,y']$ if and only if there exists $g\in G$ such that $g.(x,y)=(g.x,g.y)=(x,y')$. Since the action on $P$ is free, $g.x=x$ implies $g=Id_G$, and hence $y=y'$. Thus the map $F\rightarrow p_F^{-1}(m)$ defined by $y\mapsto [x,y]$ is injective. It is also surjective since $G$ acts transitively on $p^{-1}(m)$: every element of $p_F^{-1}(m)$ is of the form $[x',y]$ with $x'\in p^{-1}(m)$, and writing $x'=g.x$ gives $[x',y]=[x,g^{-1}.y]$.
Find the number of ways of giving name tags such that there exists a student who doesn't exit the table after 4 operations.
This doesn't get as far as you probably want, but it may help somebody. Let the number of students after one round be $p$, after two rounds be $q$, three rounds be $r$, and four rounds be $s$. At the end of round three we have $r-s$ students with their own nametags and $s$ students with a derangement of theirs. The derangement can be chosen in (the closest natural to) $\frac {s!}e=\lfloor \frac {s!}e+\frac 12 \rfloor$ ways. We can select the students that leave in ${n \choose p}{p \choose q}{q \choose r}{r \choose s}$ ways, so the final result is $$\sum_{p=2}^n\sum_{q=2}^p\sum_{r=2}^q\sum_{s=2}^r {n \choose p}{p \choose q}{q \choose r}{r \choose s}\left \lfloor \frac {s!}e+\frac 12 \right \rfloor$$ where the sum starts from $2$ because you can't have just one student left; the term for $s=1$ would contribute zero anyway, since $\left\lfloor \frac{1!}{e}+\frac 12\right\rfloor=0$.
frequency of 'special-prime' gaps
Let the primes be $P,Q,10P+a,10Q+b$. If $P\neq Q$, then all four numbers must be prime. If $P=Q$, only three numbers must be prime. That gives an advantage to single-digit differences. $a$ and $b$ must be 1,3,7 or 9. If 1 or 7 is involved, then $P$ or $Q$ cannot be one less than a multiple of 3. That gives an advantage to $3$ and $9$. So a difference of six, which is $(10P+9)-(10P+3)$, has these advantages.
An infinite sequence a for which prime divisors of $a_i^2+1$ are in the set $S$
Given a finite set of primes $S$, there are at most finitely many relatively prime $S$-units whose sum is a perfect square (or, more generally, a perfect power). Such a result follows from bounds for linear forms in logarithms (this is certainly in the book of Shorey and Tijdeman). This implies a negative answer to your question. A straightforward way to see this in your case would be to note that every $S$-unit can be written as $ab^3$ where $a$ is cube-free and there are precisely $3^{\#S}$ choices for $a$. Your problem can thus be reduced to finding the "integral points" on the (finite) collection of Mordell curves of the shape $$ Y^2 = X^3 - a^2. $$
Non-abelian fundamental group on a path-connected space
The whole Section 1.2 of Hatcher's book is dedicated to van Kampen's theorem, where nonabelian fundamental groups are abundant. The more-or-less canonical example of the wedge sum $S^1 \vee S^1$ is in particular described, and I find the explanation in the introduction of the section rather clear. You have two loops, $a$ and $b$, and it's not possible to find a homotopy that preserves endpoints between $ab$ and $ba$; just try it. PS: There are tons of examples in this section, and it starts two pages after the exercise you mention, so... Maybe next time try to read a little further.
Polynomials with integer coefficients
Suppose there are integers $r,s$ such that $rs=255$ and $r+s=1253$. From $rs=255$ we get that both $r$ and $s$ are odd, so $r+s$ is even, contradicting $r+s=1253$.
Show: $\lnot F \rightarrow G, F \rightarrow H \vdash G ∨ H$
Hint: This could be one approach to the proof; see if you can fill in the missing steps: $$\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}} \fitch{~1.~~\neg F\to G\\~2.~~F\to H}{\fitch{~3.~~\neg(F\lor\neg F)}{\fitch{~4.~~F}{~5.~~F\lor\neg F&\lor\text{Intro}~4\\~6.~~\bot&\bot\text{Intro}~3,5}\\~7.\\\fitch{~8.~~\neg F}{~9.~~F\lor\neg F&\lor\text{Intro}~8\\~10.~~\bot&\bot\text{Intro}~3,9}\\~11.\\~12.}\\~13.\\\fitch{~14.~~F}{~15.~~H&\to\text{Elim}~2,14\\~16.~~G\lor H&\lor\text{Intro}~15}\\\fitch{~17.~~\neg F}{~18.~~G&\to\text{Elim}~1,17\\~19.~~G\lor H&\lor\text{Intro}~18}\\~20.}$$
Show $N(0, a^2) \cdot c = N(0, a^2c^2)$
If $\xi \sim N(0, a^{2})$ then $c\xi$ has normal distribution with mean $0$ and variance $E(c^{2}\xi^{2})=c^{2} var(\xi)=c^{2}a^{2}$. Hence $c \xi \sim N(0, c^{2}a^{2})$.
why do we use limits approaching infinity to find horizontal asymptotes if it can't handle some functions like sin(x)?
Because $y=\pm 1$ are not asymptotes of $\sin x$. An asymptote of a function $f:\mathbb R\rightarrow\mathbb R$ is a line $y=mx+q$ such that $$\lim_{x\rightarrow\infty}(f(x)-mx-q)=0$$ this also gives the formulas for $m,q$, that is $q=\lim_{x\rightarrow\infty}(f(x)-mx)$ and $m=\lim_{x\rightarrow\infty}\frac{f(x)}{x}$. (The $\infty$ is either $+\infty$ or $-\infty$; a function may have no asymptote, like $\sin x$, the same asymptote for $\pm\infty$, like $\frac{1}{x}$, or two different asymptotes for $\pm\infty$, like $\arctan x$.) No such line exists for $\sin x$ (you can use the formulas above for $m,q$ and try to find the values for $f(x)=\sin x$; you'll find that the limit for $q$ doesn't exist).
Number of parallelpipeds
First of all, choose 4 vertices in ${8\choose 4}=70$ ways. Exclude the situations when all selected vertices are on the same face ($6$ ways) or on two opposite (not lying on the same face) parallel edges ($6$ more ways). The final answer is then $$n={8\choose 4} - 12 = 58$$
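A quick brute-force check of this count (a Python sketch, not part of the original answer): enumerate all 4-subsets of the cube's vertices and keep those that are not coplanar.

    import numpy as np
    from itertools import combinations, product

    pts = [np.array(v) for v in product((0, 1), repeat=3)]  # the 8 cube vertices
    count = 0
    for quad in combinations(pts, 4):
        diffs = np.array([q - quad[0] for q in quad[1:]])
        if np.linalg.matrix_rank(diffs) == 3:  # affinely independent, i.e. not coplanar
            count += 1
    print(count)  # 58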
Calculate area and perimeter
By drawing a reasonable number of auxiliary lines and circles it is pretty clear that the area of the shaded region equals the area of an equilateral triangle with side length $2$, hence the wanted area is $\color{red}{\sqrt{3}}$. With an integral: $$ A = 2\int_{-2}^{1}\left|\sqrt{3}-\sqrt{4-x^2}\right|\,dx = \color{red}{\sqrt{3}}. $$
Show that $\lim_{\varepsilon\rightarrow0}\int_{|\theta|=\varepsilon}f(r\theta)\varepsilon^{1-n}~d\theta=f(0)\omega_{n-1}$
You should use the Lebesgue Dominated Convergence Theorem in the form $\displaystyle\int_{|\theta|=1}\|f\|_{L^{\infty}({\bf{R}}^{n})}\,d\theta=\|f\|_{L^{\infty}({\bf{R}}^{n})}\omega_{n-1}<\infty$, where the constant function $\|f\|_{L^{\infty}({\bf{R}}^{n})}$ serves as an integrable dominating function for $|f(\epsilon r\theta)|$; writing $\displaystyle\int_{|\theta|=1}|f(\epsilon r\theta)|\,d\theta<\infty$ is misleading.
length of vector after multiplying it by a unit vector
Remember the formula $w_t \cdot w^* = \|w_t\|\|w^*\|\cos(\theta)$. We know that $\|w^*\|=1$ and $-1 \le \cos(\theta) \le 1$, so the result follows.
My result doesn't agree with results from online calculators
The online calculators assume nonnegativity. If you write $x$ as the difference of two nonnegative numbers, your first calculator gives the right answer:

    Minimize p = 4x1 - 4x2 - 3y subject to
    x1 - x2 + y <= 1
    -x1 + x2 + y <= 1
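For comparison, a minimal sketch of the same substitution in Python with scipy (variable names are mine; linprog keeps all variables nonnegative by default, which is exactly the convention the online calculators use):

    from scipy.optimize import linprog

    # minimize 4*x1 - 4*x2 - 3*y  with  x1 - x2 + y <= 1,  -x1 + x2 + y <= 1,  x1, x2, y >= 0
    c = [4, -4, -3]
    A_ub = [[1, -1, 1], [-1, 1, 1]]
    b_ub = [1, 1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds are (0, None)
    print(res.fun, res.x)  # optimal value; recover x as x1 - x2 from the first two entries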
Behavior of $\sum a_n x^n$ given $\sum |a_n - a_{n-1}| < \infty$
Let $K=\sum_{n=1}^{\infty}|a_n-a_{n-1}|.$ For $n\geq 1$ we have $$|a_n|\leq |a_0|+|a_n-a_0|= |a_0|+\left|\sum_{j=0}^{n-1}(a_j-a_{j+1})\right|\leq |a_0|+\sum_{j=0}^{n-1}|a_j-a_{j+1}| \leq |a_0|+K.$$ So $\sup \{|a_n|:n\geq 0\}\leq |a_0|+K<\infty.$ Therefore $\sum_{n=0}^{\infty} a_nx^n $ converges whenever $|x|<1,$ by comparison to the (absolutely convergent) geometric series $\sum_{n=0}^{\infty}(|a_0|+K)|x|^n.$ The correct answer is #3. Case #4 is false if $a_n=0$ for every $n$. (Case #4 is also false if $a_n=1/n!$ for every $n$, when the power series converges for all $x.$) Case #2 is false if $a_n=1$ for every $n,$ and $x=1$. (Case #2 is also false if $a_n=1+\frac {1}{n+1}$ for every $n,$ and $|x|\geq 1.$)
interval has one element: how to write it (symbol)
It is not wrong. I wouldn't use it as the go-to notation for a singleton (for which I would strongly recommend $\{a\}$), but if you are in a context where you're mainly dealing with intervals of the form $[a,b]$ for $a<b$ and you want to extend it to $a\le b$, I think no one can argue with it. The definition $$[a,b]=\{x\in\Bbb R\,:\, a\le x\le b\}$$ need not be modified. Added: On a side note, I would refrain from making a big deal out of the identities $$\forall a>b,\ (a,a)=(a,a]=(a,b)=[a,b]=\emptyset$$
How to prove that adding elements to a set does not affect its satisfiability?
See Ben-Ari, page 29. Let $A$ be a formula: $A$ is satisfiable iff $v_I (A) = \text{T}$ for some interpretation $I$ [where $v_I(A)$ is the truth-value of $A$ under $I$]. Thus, if $A$ is a clause that is satisfiable, like e.g. $p \lor q$, this means that there is an interpretation $I$ such that $v_I(p \lor q) = \text{T}$. Recall that [page 16]: An interpretation for a formula $A$ is a total function $I : P_A \to \{ \text T, \text F \}$ [where $P_A$ is the set of atoms appearing in $A$; in the case above: $P_A = \{ p,q \}$] that assigns one of the truth values $\text T$ or $\text F$ to every atom in $P_A$. If we get a new unit clause [page 77: a literal] $r$, we have that $r \notin P_A$ and thus the interpretation $I$ is not defined for it. Thus, we have to extend $I$ to $I' : \{ p,q,r \} \to \{ \text T, \text F \}$. The new set of clauses is $\{ \{p, q, \lnot r \}, \{ r \} \}$; obviously, we need $I'(r)= \text T$. But the fact that $v_{I'}(\lnot r)= \text F$ does not affect the satisfiability of $p \lor q \lor \lnot r$. Conclusion: having found an interpretation $I'$ that satisfies the new set of clauses, we have proved the fact that the "procedure" described above does not affect satisfiability.
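A brute-force check of this example (a Python sketch, not from Ben-Ari): enumerate all interpretations of $\{p,q,r\}$ and confirm that the extended clause set $\{\{p,q,\lnot r\},\{r\}\}$ is still satisfiable.

    from itertools import product

    def satisfiable(clauses, atoms):
        # a clause is a set of literals; a literal is a pair (atom, polarity)
        for values in product([True, False], repeat=len(atoms)):
            I = dict(zip(atoms, values))
            if all(any(I[a] == pol for a, pol in clause) for clause in clauses):
                return True
        return False

    old = [{("p", True), ("q", True)}]
    new = [{("p", True), ("q", True), ("r", False)}, {("r", True)}]
    print(satisfiable(old, ["p", "q"]), satisfiable(new, ["p", "q", "r"]))  # True True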
Is $\Sigma$ a topology on $X$?
There is no largest open disk. For example, if the radii are $1 - 1/n$ for $n \ge 1$, none of them is the largest. Let $r$ be the supremum of the radii. Show the open ball of radius $r$ is the union of the balls.
Show that $\sigma(a)\ne a,\forall\sigma\in G-\{1\}$ and all $a\in A$.where $G$ is abelian, transitive subgroup of $S_A$
First thing, $S_A$ certainly doesn't have a unique transitive abelian subgroup in general (even up to conjugation). Any abelian group of order $|A|$ is in fact isomorphic to a transitive abelian subgroup of $S_A$. For example $C_2\times C_2\cong\langle (1,2)(3,4),(1,3)(2,4)\rangle$ and $C_4\cong\langle (1,2,3,4)\rangle$. Now suppose $\sigma(a)=a$. Since $G$ is transitive, for any $b\in A$ there is some $\rho\in G$ with $b=\rho(a)$. What is $\rho\sigma\rho^{-1}(b)$? Since $G$ is abelian and the above is true for all $b\in A$, what does this tell you about $\sigma$?
Is there a number that using the rules of Collatz conjecture's variation $3n-1$ doesn't get to $1, 7$ or $17$?
Your function (let's call it $g: \mathbb Z \rightarrow \mathbb Z$) is just $g(x)=-f(-x)$ with $f$ being the normal Collatz function. Your cycles are known; they correspond to $\{-1\rightarrow-2\}$, $\{-5\rightarrow-14\rightarrow-7\rightarrow-20\rightarrow-10\}$ and $\{-17\rightarrow-50\rightarrow-25\rightarrow-74\rightarrow-37\rightarrow-110\rightarrow-55\rightarrow-164\rightarrow-82\rightarrow-41\rightarrow-122\rightarrow-61\rightarrow-182\rightarrow-91\rightarrow-272\rightarrow-136\rightarrow-68\rightarrow-34\}$ for $f$. It is unknown whether the Collatz hypothesis is true on the naturals, and even less is known about extensions. See also https://en.wikipedia.org/wiki/Collatz_conjecture#Iterating_on_all_integers
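A quick empirical check of these cycles (a Python sketch; this of course only verifies finitely many starting values, consistent with the problem being open in general):

    def g(n):  # the 3n - 1 variant of the Collatz map
        return n // 2 if n % 2 == 0 else 3 * n - 1

    def cycle_min(n):
        seen = set()
        while n not in seen:
            seen.add(n)
            n = g(n)
        # n now lies on the eventual cycle; return that cycle's smallest element
        cycle, m = {n}, g(n)
        while m != n:
            cycle.add(m)
            m = g(m)
        return min(cycle)

    print({cycle_min(n) for n in range(1, 10000)})  # expected: {1, 5, 17}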
Prove that this sequence diverges to infinity
I think that I have a solution, but not an elementary one. I put $A=1+i\sqrt{2}$, $B=1-i\sqrt{2}$. Note that $a_n\in \mathbb{Z}$ for all $n$. Now suppose that $|a_n|$ does not go to $\infty$. There exists then $M>0$ such that the set $\{n : |a_n|\leq M\}$ is infinite, and as a consequence, there exists $L\in \mathbb{Z}$ such that $a_n=L$ for infinitely many $n$. This means that the recurrent sequence $a_n-L$ takes the value $0$ for infinitely many $n$. Now the non-elementary fact, the Skolem-Mahler-Lech theorem: see https://en.wikipedia.org/wiki/Skolem%E2%80%93Mahler%E2%80%93Lech_theorem This theorem says that there exists an arithmetic progression of such $n$: there exist $d\geq 1$ and $r$ such that $a_{kd+r}-L=0$, i.e. $A^{kd+r}+B^{kd+r}-2L=0$, for all $k$. Now take $k=0,1,2$. The linear system in the $3$ unknowns $x_1,x_2,x_3$ given by $A^{kd}x_1+B^{kd}x_2+x_3=0$, $k=0,1,2$, admits then a nonzero solution $(A^{r},B^{r}, -2L)$. Hence the determinant of this system (a Vandermonde determinant) is zero. As $d\geq 1$, $A^d\not =1$, and $B^d\not =1$, this gives that we must have $A^d=B^d$. And to finish: let $R=\mathbb{Z}[i\sqrt{2}]$. This is a Euclidean ring for the norm, and in $R$, $A$ and $B$ are two non-associated primes. Since we have uniqueness of the decomposition into primes, $A^d=B^d$ is the final contradiction.
Reducing a matrix to upper Hessenberg form using Householder transformations in Matlab
Step by step:

1) Transforming a matrix to the upper Hessenberg form means we want to introduce some zeros in the columns $1,\ldots,n-2$. So why a loop over $1,\ldots,n$? Replace

    for k = 1 : n

by

    for k = 1 : n - 2

2) Transforming a matrix to the upper Hessenberg form also means that we want to zero out components $k+2,\ldots, n$ in the given column $k$ (and not $k+1,\ldots,n$). Also, there is no need to introduce two vectors. Hence replace

    x = Q2D(k:n, k);

by

    v = Q2D(k+1:n, k);

3) I do not understand the meaning of

    e = zeros(n-k+1,1); e(1) = 0;

It results in the zero vector.

4) There is no need to use any vector $e$ as it contains only one nonzero component, and adding/subtracting it to/from a vector does mostly nothing except changing one single component. Replace

    if sign(x(1)) == 0
        v = norm(x)*e + x;
    else
        v = sign(x(1))*norm(x)*e + x;
    end

by

    alpha = -norm(v);
    if (v(1) < 0)
        alpha = -alpha;
    end
    v(1) = v(1) - alpha;

Note that $\alpha$ is the number we should obtain in the entry $(k+1,k)$.

5) Normalisation is OK. Leave

    v = v / norm(v);

untouched :-).

6) Replace

    Q2D(k:n,k:n) = Q2D(k:n,k:n) - 2 * v * (v.' * Q2D(k:n,k:n));

by

    Q2D(k+1:n,k:n) = Q2D(k+1:n,k:n) - 2 * v * (v.' * Q2D(k+1:n,k:n));

According to point (2), we do not triangularise the matrix! Also, you can use the fact that you actually know what exactly should be in the column $k$ and use this instead:

    Q2D(k+1:n,k+1:n) = Q2D(k+1:n,k+1:n) - 2 * v * (v.' * Q2D(k+1:n,k+1:n));
    Q2D(k+1,k) = alpha;
    Q2D(k+2:n,k) = 0;

This in particular avoids creating tiny nonzeros (due to roundoff) in entries which should be exactly zero.

7) The line

    Q2D(1:n,k+1:n) = Q2D(1:n,k+1:n) - 2 * (Q2D(1:n,k+1:n) * v) * v.'; % This line is where I'm having issues

is actually correct!

So this is what remains at the end:

    for k = 1 : n - 2
        v = Q2D(k+1:n,k);
        alpha = -norm(v);
        if (v(1) < 0)
            alpha = -alpha;
        end
        v(1) = v(1) - alpha;
        v = v / norm(v);
        Q2D(k+1:n,k+1:n) = Q2D(k+1:n,k+1:n) - 2 * v * (v.' * Q2D(k+1:n,k+1:n));
        Q2D(k+1,k) = alpha;
        Q2D(k+2:n,k) = 0;
        Q2D(1:n,k+1:n) = Q2D(1:n,k+1:n) - 2 * (Q2D(1:n,k+1:n) * v) * v.';
    end
Explain why $E(X) = \int_0^\infty (1-F_X (t)) \, dt$ for every nonnegative random variable $X$
For every nonnegative random variable $X$, whether discrete or continuous or a mix of these, $$ X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt, $$ hence $$ \mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt. $$ Likewise, for every $p>0$, $$ X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt, $$ hence $$ \mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt. $$
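As a sanity check of the $p$-th moment identity (a sketch with sympy, using the standard exponential distribution, for which $\mathrm P(X>t)=e^{-t}$ and $\mathrm E(X^p)=\Gamma(p+1)$):

    from sympy import symbols, exp, integrate, oo, factorial

    t = symbols('t', positive=True)
    p = 3
    lhs = integrate(p * t**(p - 1) * exp(-t), (t, 0, oo))  # integral of p t^(p-1) P(X > t)
    print(lhs, factorial(p))  # both equal 6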
Proving that a function is Riemann non-integrable
For every partition, produce a finer partition and take all the $c_i$ to be rational; then take the same refined partition and take all $c_i$ irrational. One sum will be $0$, the other $1$.
Prove that - for every positive $x \in \mathbb{Q}$, there exists positive $y \in \mathbb{Q}$ for which $y \lt x$
My first observation is that you're getting badly bogged down in symbols. For starters, there is absolutely no reason to replace the clear statement of the problem with the symbolic expression $\forall x \in \mathbb{Q}_{\gt0} \ \exists y \in \mathbb{Q}_{\gt0}, \ y \lt x$; that's just introducing unnecessary obstacles for the reader. The same goes for your argument. Both it and its shortcomings would be much more easily read if you wrote it out in words, like this: Suppose that every positive rational $y$ is greater than or equal to $x$. Then if $y>0$ is rational, $y\ge x$. By taking the contrapositive it follows that if $y<x$, then $y$ is not a positive rational. Without the fancy symbols to get in the way there's a question that should almost leap out at you: what is $x$? Nowhere have you given any indication. And since you haven't, what can it possibly mean to suppose that $y\ge x$ for all positive rationals $y$? Back up now and think again about the actual statement: for each positive $x\in\Bbb Q$ there is a $y\in\Bbb Q$ such that $y<x$. Look at a few examples. If $x=7$, I can take $y=6$, for instance. If $x=6$, I can take $y=5$. If $x=3/2$, I can take $y=1/2$. In fact, no matter what positive rational $x$ may be, $x-1$ is a rational number less than $x$. I'm done: I've proved the statement by providing a recipe for finding a suitable $y$ given $x$. And even then I'm working too hard. Is there any rational number that is less than all positive rational numbers? Sure: $0$, or for that matter any negative rational number. Now I've proved an even stronger statement: there is a $y\in\Bbb Q$ such that $y<x$ for each positive $x\in\Bbb Q$. If you insist on looking at quantifiers, this is $$\exists y\in\Bbb Q\,\forall x\in\Bbb Q_{>0}\,(y<x)\;.$$ Here's an exercise for you to try: prove that for each positive $x\in\Bbb Q$ there is a positive $y\in\Bbb Q$ such that $y<x$. HINT: An idea something like my first argument works. A mathematical proof is a piece of expository prose. Its purpose is to convince the reader that the theorem is true. Obviously it should be mathematically correct and logically sound, but it should also be clear and easy to follow. By all means use symbols when they're appropriate: the quadratic formula is much easier to follow when expressed symbolically than when written out in words! But don't fall into the trap of thinking that the more symbolism you use, the more professional your argument looks.
What is the smallest prime of the form $2n^n+91$?
$$2 \times 1949^{1949} + 91$$ is probably prime! (Running a rigorous primality test on it would take almost a whole day $-$ see here, for instance $-$ so I'm not going to do that.) $2n^n+91$ is composite for all lower values of $n$.
Integral, set and parametric representation
Try $$ u=x-z\quad v=2y\quad w=1-z $$ New domain will be $\{(u,v,w)\in\mathbb{R}^3:u^2+v^2<w^2,\ 0<w<1\}$. Jacobian is also easy to compute.
Implicit differentiation with sin functions
When you do implicit differentiation problems, there are three important things to keep in mind:

1. Whenever you have a mixture of $x$ and $y$ factors, you must use the product/quotient rule.
2. Whenever you differentiate a term involving $y$, you must include a factor of $y^\prime$ (since we are differentiating with respect to $x$, not $y$).
3. If you did not differentiate a factor of $y$, you do not include a factor of $y^\prime$.

Keeping these things in mind, I get $$ 1 \cdot y^\prime + (\cos(y) - x\sin(y)y^\prime) = (2xy + x^2\cdot 1 \cdot y^\prime), $$ where I've included parentheses to show where product rule is taking place. Now, the whole point of this business was to get $y^\prime$ by itself. So, move everything having to do with $y^\prime$ to one side of the equation and all other terms to the other. $$ y^\prime - x\sin(y)y^\prime - x^2y^\prime = 2xy - \cos(y). $$ Factoring out the $y^\prime$ gives $$ y^\prime(1 - x\sin(y) - x^2) = 2xy - \cos(y). $$ Finally, dividing to isolate $y^\prime$ leaves us with $$ y^\prime = \frac{2xy - \cos(y)}{1 - x\sin(y) - x^2}. $$
Real part of function vanishes
They look like circles, but they are not. If you look very closely, they are a little bit flattened at their vertical extremes. I will present a far more convincing mathematical argument below. Note that $$ f(z) = e^{-Im(z)i} + e^{-Im(z\bar\omega)i} + e^{-Im(z\omega)i}. $$ Then translate it to cartesian coordinates, obtaining $$ f(a+bi) = e^{-bi} + e^{\left(\frac{a\sqrt3}{2} + \frac{b}{2}\right)i} + e^{\left(-\frac{a\sqrt3}{2} + \frac{b}{2}\right)i}. $$ Then, the real part will be \begin{align*} Re(f(a+bi)) & = \cos(-b) + \cos\left(\frac{a\sqrt3}{2} + \frac{b}{2}\right) + \cos\left(-\frac{a\sqrt3}{2} + \frac{b}{2}\right) \\ & = \cos(b) + 2\cos\left(\frac{a\sqrt3}{2}\right)\cos\left(\frac{b}{2}\right). \end{align*} So we are looking for $(a,b)\in\mathbb R^2$ such that $$ \cos(b) + 2\cos\left(\frac{a\sqrt3}{2}\right)\cos\left(\frac{b}{2}\right)=0. $$ If we solve it for $a=0$, according to Wolfram alpha, we get $$ b = 4\left(\pi n \pm \arctan\left(\sqrt{2\sqrt{3}-3}\right)\right) $$ where the solutions closest to $0$ are $$ b^{\pm} = \pm 4\arctan\left(\sqrt{2\sqrt{3}-3}\right). $$ If we solve it for $b=0$, according to Wolfram alpha, we get $$ a = \frac{4(3\pi n \pm \pi)}{3\sqrt{3}} $$ where the solutions closest to $0$ are $$ a^{\pm} = \pm \frac{4\pi}{3\sqrt{3}}. $$ By the plot, it seems that the points $(a^{\pm},0)$, $(0,b^{\pm})$ are on a same circumference. Let us see that they are not. Suppose the points $(a^{\pm},0)$ and $(0,b^{\pm})$ are on a circumference $C$ of equation $(x-x_0)^2 + (y-y_0)^2 = r^2$ (a priori, we are not even assuming that $C$ is centered at the origin). Using that $(a^{\pm},0)$ are in $C$ and $a^-=-a^+\neq0$, we get $$ \left\{\begin{array}{l} (a^+-x_0)^2 + y_0^2 = r^2 \\ (-a^+-x_0)^2 + y_0^2 = r^2\end{array}\right. \Rightarrow a^+x_0 = 0 \Rightarrow x_0=0, $$ and substituting it, we get $$ (a^+)^2 + y_0^2 = r^2. $$ Analogously, we show that $y_0=0$ and $$ (b^+)^2 = r^2. $$ Putting all together, we conclude that $x_0=y_0=0$, so $C$ is centered at the origin, and its radius satisfies $$ r= a^+ = b^+\ \ \Rightarrow\ \ \arctan\left(\sqrt{2\sqrt{3}-3}\right) = \frac{\pi}{3\sqrt{3}}. $$ A simple computation (I used Wolfram alpha again) shows that the last equality is false.
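That final inequality can also be checked numerically (a quick Python sketch):

    from math import atan, sqrt, pi

    lhs = atan(sqrt(2 * sqrt(3) - 3))   # ~ 0.5977
    rhs = pi / (3 * sqrt(3))            # ~ 0.6046
    print(lhs, rhs, abs(lhs - rhs) > 1e-3)  # the two values differ, so the curves are not circles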
Why does $\frac{1}{\sqrt{x^2+a^2}} = \frac{1}{x}(1+\frac{a^2}{x^2})^{-\frac{1}{2}}$ when $x \gg a$?
I assume that in your case $x$ is positive. Then just factor the $x$ out of the root: $$\frac{1}{\sqrt{x^2+a^2}} = \frac{1}{x\sqrt{1+a^2/x^2}} = \frac{1}{x}\left(1+\frac{a^2}{x^2}\right)^{-\frac{1}{2}}.$$ If $x$ is much bigger than $a$, then the factor to the right is approximately $(1 + \epsilon)^{-1/2} \approx 1 - \frac 12 \epsilon$ by a Taylor first order approximation.
$f$ bounded on $[a,b]$ with one or finite discontinuities implies $f$ Riemann-integrable.
Suppose $f$ is a bounded real-valued function on $[a, b]$ such that $f^2 \in \mathcal{R}[a, b]$. Does it follow that $f \in \mathcal{R}[a, b]$? Does the answer change if we assume $f^3 \in \mathcal{R}[a, b]$? Either give a proof or provide a counterexample in each of the two cases.
The subring of a field extension contains the field, is a subfield of the field extension
If $a\in R$ is nonzero and $f(a)=0$ with non-trivial $f\in F[X]$, we may assume $f(0)\neq 0$ (divide out the largest power of $X$; this uses $a\neq 0$ and the absence of zero divisors), and then $f(X)=c(Xg(X)-1)$ for some $g\in F[X]$ and $c\in F^\times$. Then $b:=g(a)\in R$ and $c(ba-1)=f(a)=0$, so $ba=1$.
Max-turn hamiltonian path in square grids
Answer to OP's question (proof follows): for Hamiltonian paths, if $n$ is even the maximum number of turns is $n^2-n$; if $n$ is odd the maximum number of turns is $n^2-n-1$. The minimum number of turns on a Hamiltonian path on an $n \times n$ lattice is $2n-2$, which can be constructed in at least two non-isomorphic ways (rows or spiral) for each graph. The maximum number of turns of a Hamiltonian path in a $2 \times n$ lattice is also $2n-2$. To maximize the number of turns when $n$ is even, you can consider the graph to be a collection of $2 \times n$ subgraphs, each of which can have a maximum of $2n-2$ turns in it, starting from an outside corner on each side. The $-2$ comes from the fact that in order to turn the corner you must have one vertex that does not turn: for an outside $2 \times n$ you lose one vertex going into the corner and one from the starting/ending vertex, while the interior $2 \times n$'s must have one straight vertex coming in and one coming out. With $2n-2$ turns in each of the $n/2$ separate rectangles, this gives the formula $n^2-n$ for even $n$. If $n$ is odd, it is not so simple to construct a path because the rows do not divide evenly by $2$, and the construction of a maximal turning path differs according to whether $n$ is $1$ or $3$ mod $4$. If $n\equiv 1 \pmod 4$, you begin again at a corner; in the first $2\times n$ rectangle you again have $2n-2$ turning vertices. You then continue down the side of the graph (a $2\times (n-2)$ rectangle) and collect $2n-4-1$ more turning vertices. Repeat on the bottom of the graph. Continuing towards the starting corner, you now have a $2 \times (n-4)$ rectangle that yields $2n-8-1$ turning vertices. Continue on towards the center: every two new directions you turn, the rectangles get smaller and there are $4$ fewer vertices. When you reach the center (where the path ends) you gain only $1$ turn for the final $3$ vertices. Putting it all together, there are $2n-2$ turns in the first rectangle, $1$ turn for the center, and a spiral containing $n-3$ rectangles, in sets of two, beginning with $2n-5$ turns and losing $4$ for every set, for an average of $\frac{(2n-5)+5}{2}=n$ turns per rectangle: $$2n-2+1+(n-3)\,n=n^2-n-1.$$ If $n\equiv 3 \pmod 4$, you can proceed as above, but the actual center vertex of the graph is contained in the last rectangle before the end of the path; the last $3$ vertices and their $1$ turn remain the same, but the path is off-center in the graph. The count remains the same: $2n-2$ turns for the first rectangle, $n-3$ spiral rectangles at an average of $n$ turns each, and one for the last turn of the path still sums to $n^2-n-1$ for all odd $n$. Answer to the question asked in comments, for Hamiltonian cycles: if $n$ is odd there is no Hamiltonian cycle on an $n \times n$ grid. This can be proved from the contrapositive of Theorem 3.16 on page 214 of "Graphs and Digraphs" by Chartrand: numbering the vertices of an odd $n \times n$ grid from left to right, top to bottom, by $1,2,3,\ldots,n^2$, select the set $S$ of all even-numbered vertices. Then $|S|=\frac{n^2-1}{2}$ and there are $\frac{n^2+1}{2}$ disconnected components, so the graph cannot be Hamiltonian. If $n$ is even you can construct a "spiral" cycle like the spirals for the odd paths given above. I haven't found a closed-form expression yet that deals with the center of the cycle; $\left(\frac{3n^2}{4}+\frac{3n}{2}-12\right)+C$, where $C$ is the number of turns at the center of the graph, is a good description of the maximum number of turns in a Hamiltonian cycle on an $n \times n$ grid.
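The claimed maxima can be confirmed for small boards by exhaustive search (a Python sketch, not part of the original argument; it enumerates every Hamiltonian path, so it is only feasible for small $n$):

    from itertools import product

    def max_turns(n):
        cells = set(product(range(n), repeat=2))
        best = 0

        def dfs(path, visited):
            nonlocal best
            if len(path) == n * n:
                # count interior vertices where the step direction changes
                turns = sum(
                    1 for i in range(1, len(path) - 1)
                    if (path[i + 1][0] - path[i][0], path[i + 1][1] - path[i][1])
                    != (path[i][0] - path[i - 1][0], path[i][1] - path[i - 1][1])
                )
                best = max(best, turns)
                return
            x, y = path[-1]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (x + dx, y + dy)
                if nxt in cells and nxt not in visited:
                    visited.add(nxt)
                    path.append(nxt)
                    dfs(path, visited)
                    path.pop()
                    visited.remove(nxt)

        for start in cells:
            dfs([start], {start})
        return best

    for n in (2, 3, 4):
        print(n, max_turns(n))  # the formulas above predict 2, 5, 12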
Sign of the Hankel representation of the Gamma function
Consider the ray $DE$. The integral is $$\int_{-\infty}^0 (t - i \epsilon)^{z - 1} e^{t - i \epsilon} dt = \int_{-\infty}^0 e^{(z - 1)(\ln |t - i \epsilon| + i \arg(t - i \epsilon))} e^{t - i \epsilon} dt \to \\ e^{-i \pi (z - 1)} \int_{-\infty}^0 (-t)^{z - 1} e^t dt = -e^{-i \pi z} \int_0^\infty t^{z - 1} e^{-t} dt,$$ so the signs in $I$ are off.
Very silly question about an inequality in integration of two functions
Notice that it suffices to show $\left|\int f \right| \le \int |f|$. Now $-|f|\le f \le |f|$. Can you finish from there?
Find all idempotent elements in the group algebra $\mathbb CC_3$
You can also invoke the Chinese Remainder Theorem to find all idempotents. Let $\omega := \frac 1 2 (i \sqrt{3} -1)$ be a primitive third root of unity. There are canonical isomorphisms $\mathbb{C} C_3 \cong \mathbb{C}[X]/(X^3-1) \cong \mathbb{C}[X]/(X-1) \times \mathbb{C}[X]/(X-\omega) \times \mathbb{C}[X]/(X-\omega^2) \cong \mathbb{C} \times \mathbb{C} \times \mathbb{C}$. Now you can easily find all idempotents of $\mathbb{C}^3$ and then get those of $\mathbb{C} C_3$ by applying each isomorphism. Note that since the isomorphisms are $\mathbb{C}$-linear, it suffices to consider a basis of $\mathbb{C}^3$.
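For concreteness, here is what this yields when spelled out (a routine computation under the stated isomorphisms, writing $g$ for a generator of $C_3$): pulling the three standard basis idempotents of $\mathbb{C}^3$ back gives the primitive idempotents $$ e_j=\frac{1}{3}\left(1+\omega^{-j}g+\omega^{-2j}g^{2}\right),\qquad j=0,1,2, $$ and the idempotents of $\mathbb{C} C_3$ are exactly the $2^3=8$ sums of distinct $e_j$'s (including the empty sum $0$ and the full sum $1$).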
How to find $v = \frac{u_t}{1-u_x}$ and $u_x = \frac{U_X}{1+U_X}$
Good start. This is a classical derivation in continuum mechanics, where we define coordinates in the reference configuration $(X,t)$ and in the deformed configuration $(x,t)$. The definition of velocity gives $v = \varphi_t$. With $x=\varphi(X,t)$, we have $$ \partial_t U(X,t) = U_t = u_xv + u_t ,\qquad \partial_X U(X,t) = U_X = u_x\varphi_X \, . $$ The definition of displacement gives $U = x-X$. Thus, we also have $$ \partial_t U(X,t) = v , \qquad \partial_X U(X,t) = \varphi_X - 1 \, . $$ Therefore, $v = \frac{u_t}{1-u_x}$ and $u_x = \frac{U_X}{1+U_X}$.
Catalan's number with a twist
This is the generalization (or perhaps the original formulation) of Bertrand's Ballot Problem. The number of such paths is given by the reflection principle and is $$C_{s,\ t} = \frac{s-t+1}{s+1}\binom{s+t}{t}$$ Notice that when $s=t=n$ then we recover the original Catalan numbers $$C_{n,n}=C_n=\frac{1}{n+1}\binom{2n}{n}$$
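The formula is easy to sanity-check by brute force for small parameters (a Python sketch; it enumerates all up/down sequences and keeps those whose partial sums never go negative):

    from itertools import combinations
    from math import comb

    def paths(s, t):
        # sequences of s up-steps and t down-steps whose partial sums stay >= 0
        count = 0
        for downs in combinations(range(s + t), t):
            total, ok = 0, True
            for i in range(s + t):
                total += -1 if i in downs else 1
                if total < 0:
                    ok = False
                    break
            count += ok
        return count

    for s in range(1, 6):
        for t in range(s + 1):
            assert paths(s, t) == (s - t + 1) * comb(s + t, t) // (s + 1)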
Proof regarding the limits of sequences
Let $z_n := \frac{x_n}{y_n}$, so $z_n \to L \neq 0$. Given $\epsilon > 0$, let $N_1$ be large enough so that $n \geq N_1 \implies |z_n - L| < \epsilon$. Furthermore, let $N_2$ be large enough so that $n \geq N_2 \implies |z_n| > \frac{|L|}{2}$. Then, for $n \geq \max\{N_1,N_2\}$, we have that: $$ \left|\frac{1}{z_n} - \frac{1}{L}\right| = \left|\frac{z_n - L}{z_nL}\right| < \frac{2\epsilon}{|L|^2} $$ So $\frac{1}{z_n} \to \frac{1}{L}$.
Why is it that sometimes it seems like you can integrate with respect to x or y and treat the other as a constant, and other times you can't?
In the first case, $y$ is an unknown function of $x$ (and you are integrating with respect to $x$). In principle, you could "compute" $\int y\,dx$ but the result will be an anti-derivative of $y$, not just $xy$. Of course, since $y$ is unknown, we can't write down its anti-derivative directly. In the second case, $f$ is a function of two variables, $x$ and $y$ and you are integrating with respect to one of them.
Sigma-algebra generated by a random variable
Taking the inverse image is a cool operation: it commutes with practically everything, here with $\sigma(-)$. Lemma. Let $\Omega$, $R$ be any two sets, $X \colon \Omega \to R$ a map and $\mathcal A \subseteq \mathfrak P(R)$. Then $$ \sigma\bigl(X^{-1}(\mathcal A)\bigr) = X^{-1}\bigl(\sigma(\mathcal A)\bigr) $$ Proof. To show $\subseteq$ note that the right hand side is a sigma algebra, as $\sigma(\def\A{\mathcal A}\A)$ is one and $X^{-1}(A^c) = X^{-1}(A)^c$ and $X^{-1}(\bigcup_n A_n) = \bigcup_n X^{-1}(A_n)$ hold for any $A, A_n \in \sigma(\mathcal A)$. As $X^{-1}(\A) \subseteq X^{-1}\bigl(\sigma(\A)\bigr)$, we are done, since this implies now $\sigma\bigl(X^{-1}(\A)\bigr) \subseteq X^{-1}\bigl(\sigma(\A)\bigr)$. To see $\supseteq$, let $\mathcal B := \bigl\{A \in \sigma(\A) : X^{-1}(A) \in \sigma\bigl(X^{-1}(\A)\bigr)\bigr\}$. As above, we see that $\mathcal B$ is a $\sigma$-algebra, and we have that $X^{-1}(A) \in \sigma \bigl(X^{-1}(\A)\bigr)$ for $A \in \A$. Hence $\A \subseteq \mathcal B$, therefore $\sigma(\A) \subseteq \mathcal B$, or, by definition of $\mathcal B$, $X^{-1}\bigl(\sigma(\A)\bigr) \subseteq \sigma\bigl(X^{-1}(\A)\bigr)$. $\square$ Now, as $\{(-\infty, x] : x \in \mathbf R\}$ generates $\mathrm{Bor}(\mathbf R)$, by the lemma above $\{X^{-1}(-\infty, x]: x \in \mathbf R\}$ generates $\sigma(X)$.
Is the following identity correct? $e^1 = \int_0^1 (1 + nx^n) e^{x^n} dx\, , \forall n$
$$\int \big[f(x)+xf'(x)\big]\,dx = xf(x),$$ where $f(x)=e^{x^n}$, so that $$\int (1 + nx^n) e^{x^n}\, dx= x\,e^{x^n},\quad \forall n,$$ which can be evaluated between any limits. In particular, $$\int_0^1 (1 + nx^n) e^{x^n}\, dx = \Big[x\,e^{x^n}\Big]_0^1 = e.$$ So, what you've done is correct for all $n$!
How to compute the average weight of an undirected graph?
When you iterate over the vertices you will count every edge $e=\{v,w\}$ exactly twice (once when you visit $v$ and once when you visit $w$). Thus you can sum up all weights of incident edges for every vertex and divide the result by $2\cdot m$ (where $m$ is the number of edges) to get the average weight.
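A minimal sketch in Python (assuming, hypothetically, a weighted adjacency-list representation where graph[v] is a list of (neighbor, weight) pairs):

    def average_edge_weight(graph):
        # every edge {v, w} is seen twice: once from v and once from w
        total = sum(w for v in graph for (_, w) in graph[v])
        m = sum(len(graph[v]) for v in graph) // 2   # number of edges
        return total / (2 * m)

    g = {0: [(1, 2.0)], 1: [(0, 2.0), (2, 4.0)], 2: [(1, 4.0)]}
    print(average_edge_weight(g))  # (2 + 4) / 2 edges = 3.0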
Can you determine the average second derivative from a set of points?
You can't compute the exact average of the second derivative with just the function's values at the points; you need the values of the derivative at those points too. Then the value is given by $$\bar{f}''=\frac{f'(b)-f'(a)}{b-a}$$ The function's values at a finite set of points give you an average of the derivative, but no estimate of the value of the derivative at those points, so there is not enough information here.
$f'(x)=f(x)$ and $f(0)=0$ implies that $f(x)=0$ formal proof
An implicit assumption is that the function is defined on some open interval containing $0$. Set $$g(x)=e^{-x}f(x)$$ and compute the derivative: $$g'(x)=-e^{-x}f(x)+e^{-x}f'(x)=-e^{-x}f(x)+e^{-x}f(x)=0$$ so the function $g$ is constant on the interval where it's defined. Since $g(0)=e^{-0}f(0)=0$ you can conclude that $g(x)=0$ for all $x$ and therefore also $f(x)=0$ for all $x$. Without the initial assumption, you can get different functions with that property: define, for instance, $$f(x)=\begin{cases} 0 & \text{if $-1<x<1$}\\ e^x & \text{if $2<x<3$} \end{cases}$$ Then $f$ satisfies the requirements, but it's not constant.
Find the interval of convergence $\sum\limits_{n=1}^\infty \frac{(3n)!(x^n)}{(n!)^3}$
Observe that \begin{align} \sum_{n=1}^\infty\frac{(3n)!\left(\frac{1}{27}\right)^n}{(n!)^3} &=\sum_{n=1}^\infty\frac{(3n)!}{\left[3^n(n!)\right]^3}\\ &=\sum_{n=1}^\infty\frac{(3n)!}{\left[3\cdot6\cdot9\cdots3n\right]^3}\\ &=\sum_{n=1}^\infty\left[\frac{1\cdot2}{3^2}\cdot\frac{4\cdot5}{6^2}\cdots\frac{(3n-2)(3n-1)}{(3n)^2}\right]. \end{align} By using the limit comparison test with the divergent series $\displaystyle\sum_{n=1}^\infty\frac{1}{n}$, we see that \begin{align} &\lim_{n\to\infty}\frac{\frac{1\cdot2}{3^2}\cdot\frac{4\cdot5}{6^2}\cdots\frac{(3n-2)(3n-1)}{(3n)^2}}{\frac{1}{n}}\\ &\qquad=\lim_{n\to\infty}\left[\frac{1\cdot2}{3^2}\cdot\frac{4\cdot5}{6^2}\cdots \frac{(3n-5)(3n-4)}{(3n-3)^2}\cdot\frac{(3n-2)(3n-1)}{9n}\right]\\ &\qquad=\lim_{n\to\infty}\left[\frac{1\cdot2}{3^2}\cdot\frac{4\cdot5}{3\cdot6}\cdots \frac{(3n-5)(3n-4)}{(3n-6)(3n-3)}\cdot\frac{(3n-2)(3n-1)}{(3n-3)3n}\right]\\ &\qquad\ge \frac{1\cdot2}{3^2}=\frac{2}{9}. \end{align} Hence $\displaystyle\sum_{n=1}^\infty\frac{(3n)!\left(\frac{1}{27}\right)^n}{(n!)^3}$ diverges.
Price of a commodity converges to a limiting price
Kyle, your answer should be , not $p_k=\frac{c-a+d\,p_{k-1}}{b}$
How is $A=\{x_n:n\in \mathbb N\}$ an infinite set? Can you explain the proof?
Here is an alternative argument. Assume that $A$ is finite. Then there must exist some $a \in A$ such that $x_n = a$ for infinitely many $n$. There exists $U \in \mathcal O$ such that $a \in U$, and $\varepsilon > 0$ such that $B_d(a,\varepsilon) \subset U$. Take any $n$ such that $x_n = a$ and $\frac{1}{n} < \varepsilon$. Then $B_d(x_n,\frac{1}{n}) \subset U$, which is impossible.
Description of Model of Euclidean Geometry found in the Hyperbolic Plane
Given the Hyperbolic plane and any origin point, introduce $(r,\theta)$ polar coordinates. That is, each point in the plane has a distance $r\ge 0$ from the origin and an angle of $\theta$ from a reference ray. Map any point with $(r,\theta)$ coordinates to the point in the Euclidean plane with the same polar coordinates. This is a one-to-one mapping between the entire Hyperbolic plane and the entire Euclidean plane, giving a model of the Euclidean plane in the Hyperbolic plane. Of course, it is not an isometric or even conformal model. If you are allowed to use Hyperbolic 3-space, then any horosphere is an isometrically embedded model of the Euclidean plane. This is exactly analogous to the situation in Euclidean 3-space, where the surface of any sphere (antipodal points identified) is an isometrically embedded model of the elliptic plane in Elliptic geometry.
If $\{v_1, v_2, ..., v_n\}$ is a basis and $f$ is an injective morphism, show that $\{f(v_1), f(v_2), ..., f(v_n)\}$ is linearly independent.
Hint 1: Since $f$ is injective, the kernel of $f$ is $\{0\}$ (i.e., the only vector from $V_{1}$ being mapped to the $0$ vector in $V_{2}$ is the $0$ vector -- can you prove this?). Hint 2: $a_{1}f(v_{1}) + \dots + a_{n}f(v_{n}) = f(a_{1}v_{1} + \dots + a_{n}v_{n})$. From hint 2, if the LHS $= 0$, what does that tell us about $a_{1}v_{1} + \dots + a_{n}v_{n}$? What does this imply about $a_{1}, \dots, a_{n}$?
Parameters in the Hamilton-Jacobi Equation
Notice that the Hamilton–Jacobi (HJ) eq. does not depend on the un-differentiated function $S$. Hence, if $S$ is a solution to the HJ eq., then adding any constant $\alpha_{n+1}$ would trivially be a new solution $S+\alpha_{n+1}$. Traditionally, one does not bother to include this trivial integration constant $\alpha_{n+1}$ in the tally of integration constants $\alpha_{1}, \ldots ,\alpha_{n}$. References: H. Goldstein, Classical Mechanics, Section 10.1.
Rigorous proof that $dx dy=r\ dr\ d\theta$
In the geometric approach, $dr^2=0$ as it is not only small but also symmetric (see here). In the algebraic, more rigorous approach, you are differentiating $x$ by $\theta$ and $y$ by $r$, but you are forgetting the cross terms! These are functions of two variables each, so you should take all partial derivatives: $$ dx=\dfrac{\partial x}{\partial r}dr+\dfrac{\partial x}{\partial \theta}d\theta $$ $$ dy=\dfrac{\partial y}{\partial r}dr+\dfrac{\partial y}{\partial \theta}d\theta\;. $$ Differentiating (you did already two of them, here we do all four): $$ dx=\sin\theta\,dr+r\,\cos\theta\,d\theta $$ $$ dy=\cos\theta\,dr-r\,\sin\theta\,d\theta\;. $$ The exterior product of these two $1$-forms (see link above, or think of the cross product, or think of the area of the parallelogram) is: $$ dx \, dy = (\sin\theta\,dr)(-r\,\sin\theta\,d\theta)+(r\,\cos\theta\,d\theta)(\cos\theta\,dr) $$ $$ = -r(\sin^2\theta +\cos^2\theta)\,dr\,d\theta=-r\,dr\,d\theta, $$ using $d\theta\,dr=-dr\,d\theta$. Usually one takes instead $x=r\cos\theta$, $y=r\sin\theta$, so that the minus sign becomes a plus. But anyway, what matters is the absolute value of that thing. I think that what you were missing is the link between area (and more generally volumes) and exterior product. Look it up starting from the link above, it's very interesting.
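A quick symbolic check of the same computation (a sketch with sympy, using the $x=r\sin\theta$, $y=r\cos\theta$ convention from above):

    from sympy import symbols, sin, cos, Matrix, simplify

    r, theta = symbols('r theta', positive=True)
    x, y = r * sin(theta), r * cos(theta)
    J = Matrix([x, y]).jacobian([r, theta])
    print(simplify(J.det()))  # -r, matching the sign found above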
What is an algorithm to compute the minimal polynomial?
For the 2 x 2 case, here's an algorithm: Compute the characteristic polynomial. If it's not a perfect square, then it actually IS the minimal polynomial. If it IS a perfect square, say $(x - a)^2$, then your matrix is either $aI$, in which case the minimal polynomial is $(x-a)$, or it's not, in which case the minimal polynomial is $(x-a)^2$. Why does this work? 1. The minimal polynomial always divides the characteristic polynomial. 2. The minimal polynomial of the Jordan form is the same as the minimal polynomial of $M$. If $M$ has two distinct eigenvalues, then its Jordan form is $$ \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} $$ and it's evident that the minimal polynomial of this is $(x - \lambda_1)(x - \lambda_2)$. If it has duplicate eigenvalues, then the Jordan form looks like either $$ \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_1 \end{bmatrix} $$ or $$ \begin{bmatrix} \lambda_1 & 1 \\ 0 & \lambda_1 \end{bmatrix} $$ In the first case, your matrix is conjugate to $\lambda_1 I$, hence must equal $\lambda_1 I$, whose minimal polynomial is $(x - \lambda_1)$. In the second case, the matrix doesn't satisfy that polynomial, so the min poly must have two $(x - \lambda_1)$ factors, hence must be $(x - \lambda_1)^2$.
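A direct transcription of this algorithm (a sketch in Python with sympy; the helper name min_poly_2x2 is mine):

    from sympy import Matrix, symbols, eye

    x = symbols('x')

    def min_poly_2x2(M):
        p = M.charpoly(x).as_expr()      # x**2 - tr(M)*x + det(M)
        tr, det = M.trace(), M.det()
        if (tr**2 - 4 * det) != 0:       # char poly is not a perfect square
            return p
        a = tr / 2                       # the repeated eigenvalue
        if M == a * eye(2):              # scalar matrix
            return x - a
        return (x - a)**2

    print(min_poly_2x2(Matrix([[2, 0], [0, 2]])))  # x - 2
    print(min_poly_2x2(Matrix([[2, 1], [0, 2]])))  # (x - 2)**2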
Mixing tank differential equation for mass
$$\frac{dm}{dt}=q_{in} - q_{out}$$ For $0 \leq t \leq T$: $$q_{in}=kc_{in},\qquad q_{out}=\frac{km}{V_0}.$$ So, the governing equation is $$\frac{dm}{dt}=kc_{in}-\frac{km}{V_0}$$ With the integrating factor $e^{\frac{k}{V_0}t}$ and $m(0)=0$, the solution is $$e^{\frac{k}{V_0}t}m=c_{in}V_0\left(e^{\frac{k}{V_0}t}-1\right),\qquad\text{i.e.}\qquad m(t)=c_{in}V_0\left(1-e^{-\frac{k}{V_0}t}\right).$$ Thus, $$m_T=m(T)=c_{in}V_0\left(1-e^{-\frac{k}{V_0}T}\right).$$ Then, for $t\geq T$, $q_{in}=0$. So, $$m=m_Te^{-\frac{k}{V_0}(t-T)}$$
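The first phase can be verified symbolically (a sketch with sympy):

    from sympy import symbols, Function, Eq, dsolve, simplify

    t, k, V0, cin = symbols('t k V_0 c_in', positive=True)
    m = Function('m')
    ode = Eq(m(t).diff(t), k * cin - k * m(t) / V0)
    sol = dsolve(ode, m(t), ics={m(0): 0})
    print(simplify(sol.rhs))  # V_0*c_in*(1 - exp(-k*t/V_0)), up to rearrangement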
About Sobolev-Poincare inequality on compact manifolds
No, you can't have $p<1$ there. Take the constant function $u\equiv \lambda$. Your inequality becomes $$\|\lambda\|_{2^*}\le \|\lambda\|_1^p$$ which (if $p< 1$) fails when $\lambda$ is large enough. When you imagine an inequality you'd like to be valid, consider how it scales when $u$ is multiplied by a positive number, or (when working on vector spaces) when the argument of $u$ is rescaled.
The order deduced from relations in $D_n$
Actually proving anything about a group presented using generators and relations involves tricky combinatorial maneuvers -- unless you use the universal property. Usually "proofs" not using the universal property appeal to intuition and are incomplete and kind of hand-wavy. Here's a "solid" proof. The universal property: Let $G=\langle X|R \rangle$ ($G$ is generated by the elements of $X$ with relations $R$). Then let $f$ be any function from $X$ to some group: $f:X \to H$ where given any relation: $x_1^{e_1}x_2^{e_2}\cdots x_n^{e_n} \in R$ (here $x_i \in X$ and $e_i = \pm 1$) we have $f(x_1)^{e_1}f(x_2)^{e_2}\cdots f(x_n)^{e_n}=1$ (essentially images of relations must map to the identity). Then there exists a unique homomorphism $\hat{f}:G \to H$ such that $\hat{f}(x) = f(x)$ for all $x \in X$. This universal property is really the definition of what we mean by a group presented via generators and relations. So $D_n = \langle a,b \;|\; a^n=1,b^2=1,abab=1 \rangle$ has such a universal property. Consider $f:\{a,b\} \to H$ defined by $f(a)=R_{360^\circ/n}$ and $f(b)=V$ where $H$ is the group of isometries of a regular $n$-gon, $R_{360^\circ/n}$ is a rotation of $360^\circ/n$, and $V$ is some reflection in $H$ ($H$ is a concrete realization of $D_n$). Notice that the relation $a^n=1$ translates to $f(a)^n=R_{360^\circ/n}^n=R_{0^\circ}$, $b^2=1$ translates to $f(b)^2=V^2 =R_{0^\circ}$. Finally, $abab=1$ translates to $f(a)f(b)f(a)f(b)=R_{360^\circ/n}VR_{360^\circ/n}V=WW$ where $W$ is some reflection so $f(a)f(b)f(a)f(b)=R_{0^\circ}$. Thus all relations are satisfied. Therefore, there exists a unique homomorphism extending $f$, call it $\hat{f}:D_n\to H$. Now, $\hat{f}(a)=f(a)=R_{360^\circ/n}$ so the order of $\hat{f}(a)$ is $n$. Thus the order of $a$ must be a multiple of $n$. However, $a^n=1$ so the order of $a$ is no more than $n$. Thus it is $n$. We could have used $2 \times 2$ matrices to realize $D_n$ (but that's too much trouble to typeset). Also, without too much more work, we could have proved that $\hat{f}$ is actually an isomorphism. To prove the order of $b$ is 2 we can use the concrete group $\mathbb{Z}_2$. Define $f:\{a,b\}\to \mathbb{Z}_2$ by $f(a)=0$, $f(b)=1$ (in $\mathbb{Z}_2$). Let's verify the relations (keep in mind that we use additive notation for $\mathbb{Z}_2$): $nf(a)=n0=0$, $2f(b)=2\cdot 1=0$, $f(a)+f(b)+f(a)+f(b)=0+1+0+1=0$. Therefore, there exists a unique homomorphism extending $f$, say $\hat{f}:D_n \to \mathbb{Z}_2$. Again, $\hat{f}(b)=1$ has order 2, so $b$'s order must be a multiple of 2. However, $b^2=1$ so its order is no more than 2. Therefore, its order is exactly 2. In the end, you prove things about your generator/relation group by using concrete groups (concrete homomorphic images). Note: It is tempting to try to prove the order of $a$ is $n$ using a group like $\mathbb{Z}_n$, but $\mathbb{Z}_n$ isn't a homomorphic image of $D_n$ (in general) so this doesn't work. :(
A question about the definition of partition in Riemann Integral
In the theory of the Riemann integral on an interval $[a,b]$, it is completely standard that "partitions" of $[a,b]$ are necessarily finite. This is what Riemann did for his Riemann sums. What Rudin gives is not Riemann's approach but a slightly simpler one of G. Darboux, which uses upper and lower sums and upper and lower integrals: Darboux used finite partitions too. (Perhaps following Rudin, many people do not distinguish between the integrals of Riemann and Darboux: although they are defined differently, they can be shown to yield the same linear functional, in particular with the same domain of integrable functions: those which are bounded and with a zero measure set of discontinuities.) Thus your question "Are finite partitions sufficient?" is a little strange from the standard perspective: sufficient for what? All the usual theorems and proofs use finite partitions. In fact I own or have flipped through at least a dozen texts treating the Riemann integral, and to the best of my recollection I have never seen a "proper" Riemann integral using countably infinite partitions. Could you include in your question the precise definition you learned? Is there any textbook which uses this definition? (Are there any advantages to doing so?)
Parameterise a line represented by n points
Yes, we can parameterize that but the greater the number of segments the uglier the parameterization becomes because of the cases in the definition. For a line segment between two points $p, q$ we parametrize the segment by $$f(t)=(1-t)p + tq$$ where $t\in [0,1]$. If we want to add another point $r$ to get two line segments: one between $p$ and $q$ and another between $q$ and $r$ we have to do something like $$f(t) = \begin{cases} (1-t)p + tq, &amp; 0\le t\le 1 \\ (2-t)q + (t-1)r &amp; 1\lt t\le 2 \end{cases}$$ We continue in this manner until we include $n$ different cases for the $n$ different segments. This parameterization is not unique. Nothing says we have to travel along each segment in the same amount of time. We can also rescale the domain so the entire parameterization happens in an interval other than size $n$. Update If you are writing a computer program and have many points then we don't need a parameterization as above. We can reuse the parameterization of one segment along with a look up table to navigate along the curve. Store the individual segments as pairs of points in a 0-indexed array. The first segment will be segment $0$ and the $n$th segment will be $n-1$. At time $t$, $0\le t\lt n$, use the integer part to find the segment and plug the remainder into the formula for a single line segment above. Use the points in the array for $p$ and $q$ in the formula.
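A sketch of the look-up-table approach in Python (hypothetical function and variable names):

    def polyline_point(t, points):
        # points: list of (x, y) tuples; t in [0, len(points) - 1]
        i = min(int(t), len(points) - 2)   # index of the current segment
        s = t - i                          # fractional position within it
        (px, py), (qx, qy) = points[i], points[i + 1]
        return ((1 - s) * px + s * qx, (1 - s) * py + s * qy)

    pts = [(0, 0), (1, 0), (1, 1), (2, 3)]
    print(polyline_point(0.5, pts))   # (0.5, 0.0), halfway along the first segment
    print(polyline_point(2.25, pts))  # (1.25, 1.5), a quarter of the way along the third segment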
Direct sums, submodules and simple tops
No: Take any field $F$. Let $A=X_1=X_2=F$ and $K=\{(x,x)\in X_1 \oplus X_2 : x \in F\}$. Then $X_1$, $X_2$, and $X_1\oplus X_2 /K$ are all simple, so have simple tops, but $K$ is not contained in either $X_1$ or $X_2$.
Greatest common divisor of rational polynomials in $\mathbb{Q}[x]$
Hint $\ $ If $\,p\,$ is a prime then $\,\gcd(pa,b) = p\gcd(a,b/p)\,$ if $\,p\mid b\,$ else $\,\gcd(a,b)$ Use this recursively as $\,p\,$ ranges over all the primes factors of $\,a(x),\,$ viz. $\,x\!-\!1,\ x\!+\!1,\ x+2,\ $ to compute the required gcds with $\,a(x).$ Alternatively use $\ \gcd(p^i f, p^j g) = p^{\min(i,j)}\gcd(f,g)\ $ if $\ p\nmid f,g$ Note $\,x^2-1 = (x-1)(x+1)\,$ and $\,x^2+3x+2 = (x+1)(x+2),\,$ or else use the factor theorem, i.e. $\,x-k\mid f(x)\iff f(k)=0.$
Is this a valid argument to show that for $m > 4$, $2^m - 3^n \ne 7$
Catalan's conjecture is way overkill. Since $2^i-3^j<2^i+3^j$ are integers and the second is positive, the only way for their product to be $7$ is if $2^i+3^j=7.$ But $m\geq 6$ means $i\geq 3,$ so $$2^i+3^j>2^3=8>7.$$ However, Catalan might be needed to prove the more general result that there is no prime $p\equiv 7\pmod{12}$ and $m>4,n>0 $ such that: $$p=2^m-3^n$$ Edit: (From commenter Erick Wong) It turns out it is pretty easy to show there is no $2^i-3^j=1$ with $i>2$ just by looking modulo $8,$ so you don't need Catalan for that, either.
Convergence in norm equivalent formulations
No, it's not. Let $g_n=1_{B_r(0)}$ and $g=-1_{B_r(0)}$; then $\|g_n\|_p=\|g\|_p$ but $g_n$ does not converge to $g$ in $L^p$.
Profinite groups and extension of a monomorphism
The profinite completion of any profinite topological group is just itself: the identity map $G\to G$ satisfies the universal property of the profinite completion. So $G=G^\vee$ and $\varphi^\vee=\varphi$, so $\varphi^\vee$ is injective if $\varphi$ is injective.
How to show this function is an injection (one to one)?
Suppose $(a,b) \neq (x,y)$ but $f(a,b) = f(x,y)$, then $$ a + b\sqrt{11} = x + y \sqrt{11} \implies (b - y) \sqrt{11} = x - a \implies \sqrt{11} = \frac{x-a}{b-y} \in \mathbb{Q}$$ contradiction, so $(a,b) = (x,y) $. Notice, we can assume $b \neq y$, otherwise we would have $a = x$. So the division by $b -y$ is allowed. Notice, also: any $f: \mathbb{N} \times \mathbb{N} \to \mathbb{R} $ such that $f(a, b) = a + b \sqrt{p} $ where $p$ is prime is always injective.
Limit Trigonometric arctan
$\arctan x = x - \dfrac{1}{3}x^3 + \dfrac{1}{5}x^5 + O(x^6)$ $(\arctan x)^3 = x^3 - x^5 + O(x^6)$ $2\sin x = 2x - \dfrac{1}{3} x^3 + \dfrac{1}{60}x^5 + O(x^6)$ $\sin(2x) = 2x - \dfrac{4}{3} x^3 + \dfrac{4}{15}x^5 + O(x^6)$ Then \begin{equation} \lim_{x \to 0} \frac{x^3 + \sin(2x) - 2\sin(x)}{\arctan(x^3) - \arctan(x)^3} = \lim_{x \to 0} \dfrac{\dfrac{1}{4}x^5+ O(x^6)}{x^5 + O(x^6)} = \lim_{x \to 0} \dfrac{\dfrac{1}{4}+ O(x)}{1 + O(x)} = \frac{1}{4} \end{equation}
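The expansion bookkeeping is easy to double-check symbolically (a one-line sympy sketch):

    from sympy import symbols, sin, atan, limit

    x = symbols('x')
    expr = (x**3 + sin(2*x) - 2*sin(x)) / (atan(x**3) - atan(x)**3)
    print(limit(expr, x, 0))  # 1/4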
how to derive an $O(\log(n))$ expression for a sequence with $3$ terms
A nice way to find formulae for sequences like these is with matrices. We can write $$a_n=12a_{n−1}−384a_{n−2}+4096a_{n−3}$$ as $$ \begin{bmatrix} a_{n} \\ a_{n-1} \\ a_{n-2} \\ \end{bmatrix} = \begin{bmatrix} 12 & -384 & 4096 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{bmatrix} \begin{bmatrix} a_{n-1} \\ a_{n-2} \\ a_{n-3} \\ \end{bmatrix} $$ or $$ \begin{bmatrix} a_{n+2} \\ a_{n+1} \\ a_{n} \\ \end{bmatrix} = \begin{bmatrix} 12 & -384 & 4096 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{bmatrix}^n \begin{bmatrix} a_{2} \\ a_{1} \\ a_{0} \\ \end{bmatrix} $$ Matrix exponentiation can be done quickly on a computer by diagonalizing the matrix, as then you just have to exponentiate a diagonal matrix. You can also use this to get an explicit formula for $a_n$. To get the explicit formula you can look at the diagonalization of the matrix: $M = PDP^{-1}$, where the diagonal of $D$ is the eigenvalues and the columns of $P$ the corresponding eigenvectors of $M$. Plugging into numpy I get the approximate values of $$ P = \begin{bmatrix} 0.99865811 & 0.99865811 & 0.99584852 \\ 0.00135862-0.05170075i & 0.00135862+0.05170075i & 0.09065119 \\ -0.00267471-0.00014067i & -0.00267471+0.00014067i & 0.0082519 \end{bmatrix} $$ and $$ D = \begin{bmatrix} 0.50725095+19.30279482i & 0 & 0 \\ 0 & 0.50725095-19.30279482i & 0 \\ 0 & 0 & 10.98549811 \end{bmatrix} $$ although you could solve for the eigenvalues and eigenvectors analytically to get exact values for $P$ and $D$. You can then invert $P$ (also can be done exactly), at which point you have $$ \begin{bmatrix} a_{n+2} \\ a_{n+1} \\ a_{n} \\ \end{bmatrix} = PD^nP^{-1} \begin{bmatrix} a_{2} \\ a_{1} \\ a_{0} \\ \end{bmatrix} $$ Multiplying out the matrices will then give you a vector with exact equations for $a_{n+2}$, $a_{n+1}$, and $a_{n}$. If you do this on a computer you will have round-off error, which means the solutions will not be exact for large $n$, but will give you a good approximation. If you want an exact solution you can do all the above math analytically (it will be pretty messy but not impossible) and that will give you exact formulas. Hope that makes sense!
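A compact version of this computation (a numpy sketch; as noted above, floating-point eigendecomposition is only exact up to roundoff):

    import numpy as np

    M = np.array([[12, -384, 4096],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=float)
    w, P = np.linalg.eig(M)            # eigenvalues and eigenvectors of M
    Pinv = np.linalg.inv(P)

    def a(n, a0, a1, a2):
        v0 = np.array([a2, a1, a0], dtype=complex)
        v = P @ np.diag(w**n) @ Pinv @ v0   # M**n applied via the diagonalization
        return v[2].real                    # last entry is a_n

    # sanity check against the recurrence itself, with hypothetical initial values
    seq = [1.0, 2.0, 3.0]
    for n in range(3, 10):
        seq.append(12*seq[-1] - 384*seq[-2] + 4096*seq[-3])
    print(round(a(7, 1, 2, 3)), round(seq[7]))  # the two values agree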
Calculation of expected value given cdf
You cannot find the exact value of $\alpha$, but you can calculate the expectation, as you did: $$\int_0^2(1-F)\,dx=\dots =2-\frac{8}{3}\alpha,$$ where $0<\alpha \leq \frac{1}{4}$.
For $p\geq1$, $f\in L^p(\mathbb{R}^n)$ and $\lambda>0$, does $\{x\in\mathbb{R}^n:|f(x)|>\lambda\}$ have finite measure?
$$ |\{|f|>\lambda\}|=\int_{|f|/\lambda>1}dx\le\int_{\mathbb{R}^n}\frac{|f|^p}{\lambda^p}\,dx=\frac{\|f\|_p^p}{\lambda^p}. $$
K-theory for non-separable C*-algebras
For every Hilbert space $H$, $$K_0(\mathcal{K}(H))=\mathbb{Z}$$ because the only compact projections in $\mathcal{B}(H^{\oplus n})\cong M_n(\mathcal{B}(H))$ are finite-rank projections and these are distinguished only by dimension. Actually $$K_0(\mathcal{K}(E)) = \mathbb{Z}$$ for any Banach space $E$. Moreover, if $H$ is infinite-dimensional then $$K_0(\mathcal{B}(H))=\{0\},$$ because $H\cong H\oplus H$, hence $[{\rm id}_H]\oplus [{\rm id}_H] = [{\rm id}_H]$ so it must be the trivial group. As Atkinson's theorem works well in non-separable spaces, the $K_1$-groups are the same as for the separable Hilbert space.
What is the difference between an indeterminate and variable?
An indeterminate is a syntactic tool that is used to write down expressions in a certain way. If we are considering polynomials, for example, we may use $x$ as an indeterminate to write them down easily: $1 + 4x - x^4$. If $x$ is an indeterminate, then this is just a way that we can write down the list of coefficients $(1, 4, 0, 0, -1)$ more easily. We use the indeterminate $x$ for stylistic reasons and to ease computation; formally speaking, we are really talking about the list of coefficients. On the other hand, a variable is used to represent an element of some set. Consider these examples: "Let $x$ be an integer." "For all integers $n$, $n$ is even or odd." "Let $x = 37$." In each case, the variable stands for an element of a set, whether it be a specific element or an arbitrary element. Variables are what we use to go about mathematical reasoning. Variables are fundamental to writing proofs; we cannot prove something about all elements of a set, or prove any "for all" or "there exists" statement, without utilizing variables. Of course, usually when we use an indeterminate (at least in the case of formal power series and polynomials) we want to emphasize a sort of analogy with variables, because it is possible to "plug in" a specific value for that indeterminate in the same way we can plug in a specific value for a variable. Therefore, in some ways indeterminates just stand for a particular kind of variable (but not the other way around).
Given a polynomial $P(x,y) = \sum\limits_{m,n = 0} {{a_{mn}}{x^m}{y^n}} $
One way is to start by finding a suitable bound on the individual terms $x^m y^n$. Suppose we can find an $L$ such that $|x_1^m y_1^n - x_2^m y_2^n| \le L \|(x_1,y_1)-(x_2,y_2)\|$ for all $m,n$ and $(x_1,y_1),(x_2,y_2) \in [-R,R]^2$. Then you have $|P(x_1,y_1)-P(x_2,y_2)| \le \left|\sum_{m,n} a_{mn} (x_1^m y_1^n-x_2^m y_2^n) \right| \le \left(L \sum_{m,n} |a_{mn}|\right) \|(x_1,y_1)-(x_2,y_2)\|$, from which you can choose an explicit $\delta>0$. To find such an $L$, apply the mean value theorem to $\phi(x,y)= x^my^n$ to bound $|\phi(x_1,y_1)-\phi(x_2,y_2)|$ on $[-R,R]^2$. This gives some $L_{mn}$; then let $L = \max_{m,n} L_{mn}$ over the finitely many nonzero terms.
Error propagation into the solution of an ODE
$x=0$ is still $x=0$ after a change in $a$ or $b$. $$ - \frac{2(a+\Delta a)}{3(b + \Delta b)} = - \frac{2a}{3b} + \frac{2(a \Delta b - b \Delta a)}{3 b (b + \Delta b)}$$ If $\Delta b$ is small compared to $b$, the term on the right is approximately $$ \dfrac{2 a}{3b^2} \Delta b - \dfrac{2}{3b} \Delta a$$
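A quick numerical sanity check of the first-order estimate; the values of `a`, `b`, `da`, `db` below are arbitrary illustrations:

```python
# Compare the exact change in -2a/(3b) against the first-order estimate.
a, b = 2.0, 5.0
da, db = 1e-3, -2e-3
exact = -2 * (a + da) / (3 * (b + db)) + 2 * a / (3 * b)  # actual change
approx = (2 * a / (3 * b**2)) * db - (2 / (3 * b)) * da   # first-order estimate
print(exact, approx)  # the two agree to first order in (da, db)
```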
How to calculate this integral using FTC
Sketch of proof: Differentiate the function $$g(x)=\int_0^{\sin^2 x}\arcsin\sqrt t\,dt+\int_0^{\cos^2 x}\arccos\sqrt t\,dt$$ to obtain that $g'(x)= 0$ for all $x\ne \frac k2\pi$ and hence, by continuity, that $g$ is constant. Look for an (easy) real number $\xi>0$ such that $\sin^2\xi=\cos^2\xi$. Use the identity $\arccos x=\frac\pi2-\arcsin x$ to obtain $$g(\xi)=\int_0^{\sin^2 \xi}\arcsin\sqrt t\,dt+\int_0^{\cos^2 \xi}\arccos\sqrt t\,dt=\\=\int_0^{\sin^2\xi}\left(\arcsin\sqrt t+\arccos\sqrt t\right)dt=\frac\pi2\sin^2\xi$$
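A numerical spot-check that $g$ is constant, equal to $\pi/4$ (since $\frac\pi2\sin^2\frac\pi4=\frac\pi4$); this sketch assumes scipy is available:

```python
# Evaluate g at a few points; all values should be pi/4 ~ 0.785398.
from math import asin, acos, sqrt, sin, cos
from scipy.integrate import quad

def g(x):
    return (quad(lambda t: asin(sqrt(t)), 0, sin(x)**2)[0]
            + quad(lambda t: acos(sqrt(t)), 0, cos(x)**2)[0])

print([round(g(x), 6) for x in (0.3, 0.9, 1.4)])
```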
Are the partial sums $\displaystyle \sum _{n=1} ^{k} \cos(nx)$ bounded?
By the standard identity $$\sum_{n=1}^k\cos nx=\frac{\sin\frac{kx}{2}\,\cos\frac{(k+1)x}{2}}{\sin\frac x2},$$ the numerator is certainly bounded (by $1$), and the denominator is fixed. Since $x\ne c\pi$ for any $c\in\Bbb Z$, $\sin\frac x2\ne 0$. Hence, $$\left|\sum_{n=1}^k\cos nx\right|\le\frac 1{\left|\sin \frac x2\right|}.$$
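A small numerical illustration; the choice of $x$ and the range of $k$ are arbitrary:

```python
# Track partial sums incrementally and compare the worst one with the bound.
import math

x = 1.3  # any x that is not an integer multiple of pi
bound = 1 / abs(math.sin(x / 2))
s, worst = 0.0, 0.0
for n in range(1, 2001):
    s += math.cos(n * x)
    worst = max(worst, abs(s))
print(worst, "<=", bound)  # prints something like 1.32 <= 1.65
```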
Polynomial term for random walk large deviations
Here is a partial answer. A heuristic computation. Let $D(x||p) = x \log\left(\frac{x}{p}\right) + (1-x)\log\left(\frac{1-x}{1-p}\right)$ denote the Kullback-Leibler divergence of $\operatorname{Ber}(x)$ w.r.t. $\operatorname{Ber}(p)$ and write $r = 2\sqrt{p(1-p)}$ for simplicity. Then $$ \frac{\sqrt{2\pi n}}{r^n} \binom{n}{k}p^k(1-p)^{n-k} \approx \frac{1}{\left(\frac{k}{n}\left(1 - \frac{k}{n}\right) \right)^{1/2}} e^{-n \left( D(\frac{k}{n}||p) - D(\frac{1}{2}||p) \right)} $$ tells us that we can expect \begin{align*} \mathbf{P}[S_n \leq 0] &\approx \frac{r^n}{\sqrt{2\pi n}} \sum_{k=1}^{\frac{n}{2}} \frac{1}{\left( \frac{k}{n}\left(1 - \frac{k}{n}\right) \right)^{1/2}} e^{-n \left( D(\frac{k}{n}||p) - D(\frac{1}{2}||p) \right)} \\ &\approx \frac{r^n}{\sqrt{2\pi n}} \sum_{j\in\{\frac{n}{2}-k:1\leq k\leq\frac{n}{2}\}} 2e^{jD'(\frac{1}{2}||p)}, \qquad (j = \tfrac{n}{2}-k) \\ &\approx \frac{r^n}{\sqrt{2\pi n}} \cdot\frac{2p}{2p-1} \left( \frac{1-p}{p} \right)^{\frac{1}{2}(n \text{ mod } 2)}. \end{align*} This matches numerical computations as well, and I believe it can be justified with some effort. (Obtaining the correct order only up to constants would be easier.) I will update my answer if I come up with a rigorous justification. A more rigorous computation. We have the following result: Claim. For each $p \in (\frac{1}{2}, 1)$ and $n\geq 1$, let $N_n$ be a binomial random variable with parameters $n$ and $p$. Then there exist constants $0 < c_1 < c_2$, depending only on $p$, such that $$ \forall n \geq 1 \ : \quad c_1 \frac{r^n}{\sqrt{n}} \leq \mathbf{P}[N_n \leq n/2] \leq c_2 \frac{r^n}{\sqrt{n}} $$ with $r = 2\sqrt{p(1-p)}$. Proof. By a quantitative version of Stirling's approximation, $n! = \Theta( n^{n+\frac{1}{2}}e^{-n} )$ for $n \geq 1$, where $\Theta$ is the Big-Theta notation. Thus for $1 \leq k \leq \frac{n}{2}$, \begin{align*} \binom{n}{k} p^k (1-p)^{n-k} &= \Theta \Bigg( \sqrt{\frac{n}{k(n-k)}} \cdot \frac{n^n}{k^k (n-k)^{n-k}} p^k(1-p)^{n-k} \Bigg) \\ &= \Theta \Bigg( \sqrt{\frac{n}{k(n-k)}} e^{-nD(\frac{k}{n}||p)} \Bigg), \end{align*} where $D(x||p) := x \log\left(\frac{x}{p}\right) + (1-x)\log\left(\frac{1-x}{1-p}\right)$. Plugging this into the summation describing $\mathbf{P}[N_n \leq n/2]$, we obtain \begin{align*} \mathbf{P}[N_n \leq n/2] &= (1-p)^n + \sum_{k=1}^{\lfloor n/2 \rfloor} \binom{n}{k} p^k (1-p)^{n-k} \\ &= (1-p)^n + \Theta \Bigg( \frac{r^n}{\sqrt{n}} \underbrace{ \sum_{k=1}^{\lfloor n/2 \rfloor} \frac{1}{\left(\frac{k}{n}\left(1 - \frac{k}{n}\right) \right)^{1/2}} e^{n \left( D(\frac{1}{2}||p) - D(\frac{k}{n}||p) \right)} }_{=(*)} \Bigg) \end{align*} In the last line, we utilized the identity $r = e^{-D(\frac{1}{2}||p)}$. Let $I_n = \{ \frac{n}{2}-k : 1 \leq k \leq \frac{n}{2} \}$ and rewrite the above sum $(*)$ as \begin{align*} (*) &= \sum_{j \in I_n} \frac{1}{\left(\frac{1}{4} - \frac{j^2}{n^2} \right)^{1/2}} e^{n \left( D(\frac{1}{2}||p) - D(\frac{1}{2}-\frac{j}{n}||p) \right)} \end{align*} Now notice that $x \mapsto D(x||p)$ is convex and strictly decreasing on $(0, \frac{1}{2}]$. From this, it follows that $$ n \left( D(\tfrac{1}{2}||p) - D(\tfrac{1}{2}-\tfrac{j}{n}||p) \right) \leq j \frac{\partial D}{\partial x}\left(\tfrac{1}{2}||p\right) = j \log\left( \frac{1-p}{p} \right). $$ Since $0 < \frac{1-p}{p} < 1$, we can use this to prove that $ (*) = \Theta\left( \sum_{j=0}^{\infty} \left( \frac{1-p}{p} \right)^j \right) = \Theta(1) $ as $n\to\infty$. Therefore the claim follows.
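The "matches numerical computations" remark above can be reproduced with a few lines of Python; a sketch, with the choice $p = 0.7$ being arbitrary:

```python
# Numerical check of the claim: P[N_n <= n/2] should be Theta(r^n / sqrt(n)).
# The printed ratio should stay bounded away from 0 and infinity
# (it oscillates with the parity of n, as the heuristic constant predicts).
from math import comb, sqrt

p = 0.7  # arbitrary choice in (1/2, 1)
r = 2 * sqrt(p * (1 - p))
for n in [50, 100, 200, 400]:
    tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1))
    print(n, tail / (r**n / sqrt(n)))
```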
The best workbook of Discrete Math
Graham, Knuth, Patashnik, Concrete Mathematics Ross, Wright, Discrete Mathematics I prefer the first one; however, it does not cover the whole of discrete mathematics, so the wider-ranging book is the second one.
$T(n) = T (\frac{n}{5}) + \frac {n}{\log (n)}$ Solving
Let $f_k = T(5^k)$. The stated recurrence equation translates into $f_k = f_{k-1} + \frac{c}{k} 5^k$, where $c = \frac{1}{\log(5)}$. The solution is simple: $$ f_k = f_1 + \sum_{q=2}^{k} \frac{c}{q} 5^{q} = f_1 + c \sum_{q=2}^{k} \int_0^5 a^{q-1} \mathrm{d}a = f_1 + \frac{1}{\log 5} \int_0^5 \frac{a^k -a}{a-1} \mathrm{d}a $$ Since the sum is dominated by its last term, $f_k = \Theta\left(\frac{5^k}{k}\right)$; in terms of $n = 5^k$, this says $T(n) = \Theta\left(\frac{n}{\log n}\right)$.
Cardan angle (zxz, zxzxz) rotation
(1) No -- the point (one of the points) is that they give different results. Therefore simply giving three angles is not sufficient to specify a rotation in space; one must also have agreed which of the 12 conventions one is using. (Each of the conventions can specify all rotation matrices, but they do it with different angle triples.) (2) The display above the table of 12 matrices shows the rotation matrices $\mathrm{Rot}(Y,\theta)$, $\mathrm{Rot}(X,\theta)$, and $\mathrm{Rot}(Z,\theta)$. Each entry in the table is then just the worked-out product of the three matrices specified -- for example, ZXZ is the matrix product $\mathrm{Rot}(Z,\theta_1)\mathrm{Rot}(X,\theta_2)\mathrm{Rot}(Z,\theta_3)$.
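To see point (1) in action, here is a minimal numpy sketch; the sign conventions below are the common right-handed active ones and may differ from the table you are reading, and the angle values are arbitrary:

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

t1, t2, t3 = 0.3, 0.5, 0.7
R_zxz = rot_z(t1) @ rot_x(t2) @ rot_z(t3)
R_xzx = rot_x(t1) @ rot_z(t2) @ rot_x(t3)
print(np.allclose(R_zxz, R_xzx))  # False: same angles, different conventions
```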
Joint probability of a number of outcomes of a continuous random variable
The key criterion is whether or not the random variables are mutually independent. If they are, then the product rule for mutually independent events applies. Whether the random variables are continuous or discrete, the joint cumulative distribution function of several mutually independent random variables is the product of their marginal cumulative distribution functions. $$\mathsf P(Y_1\leq y_1, Y_2\leq y_2,Y_3\leq y_3) ~=~ \mathsf P(Y_1\leq y_1)~\mathsf P(Y_2\leq y_2)~\mathsf P(Y_3\leq y_3)$$ If these random variables are also each continuous (that is, they have probability density functions), then the analogous rule applies: $$f_{Y_1,Y_2,Y_3}(y_1,y_2,y_3)~=~f_{Y_1}(y_1)~f_{Y_2}(y_2)~f_{Y_3}(y_3)$$ But, as stated, that is all only applicable if we have mutual independence. Otherwise you have to use conditioning.
Sequence bounded away from $0$ and $2$
Bounded away from $b$ means that there is a nontrivial interval around $b$ such that the sequence never enters it. In particular, if a sequence is bounded away from $b$ then it cannot converge to $b$. More generally, $b$ cannot be a limit point of the sequence.
Range of a parabolic shot
Using a CAS, one realizes that your solution doesn't work: in fact it is well known that, in this case, one must proceed implicitly. You start from the cartesian equation of the trajectory$$y=h+x \tan \alpha-x^2 \frac {g\,(1+\tan^2 \alpha)}{2v^2}$$The non-zero $x$-solution of the equation $y=0$ gives the range of the projectile. Because $y=0$ defines (implicitly) the range $x$ as a function $x(\alpha)$, differentiate the equation with respect to $\alpha$, solve for $x'(\alpha)$, and set this equal to zero to obtain the stationary value of $x(\alpha)$. You get $$g\,x\tan\alpha-v^2=0$$Then couple the latter with $y=0$. You find $$x=\frac vg \sqrt {v^2+2gh}$$$$\tan\alpha=\frac v{\sqrt {v^2+2gh}}$$ Using the relation $$\cos {2\alpha}=\frac {1-\tan^2 \alpha}{1+\tan^2 \alpha}$$you obtain the condition.
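If you want to double-check the elimination, the same computation can be reproduced with sympy; this is a sketch in which `t` stands for $\tan\alpha$ and all symbols are assumed positive:

```python
import sympy as sp

x, t, g, v, h = sp.symbols('x t g v h', positive=True)
y = h + x*t - x**2 * g * (1 + t**2) / (2*v**2)
# Solve y = 0 together with the stationarity condition g*x*t - v**2 = 0:
sol = sp.solve([y, g*x*t - v**2], [x, t], dict=True)
print(sol)  # expect x = v*sqrt(v**2 + 2*g*h)/g, t = v/sqrt(v**2 + 2*g*h)
```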
How to solve the recurrence T(n) = 4 T (n/2) + n! using master theorem
The version of the master theorem you state cannot handle the recurrence $$ T(n) = 4T(n/2) + n!$$ as $n!$ does not satisfy a polynomial growth bound. Intuitively, $n!$ is so much bigger than any polynomial that it completely dominates, so one will have $T(n) = \Theta(n!)$. I'll note that more precise statements of the master theorem handle this: it falls, for example, within case 3 of the Wikipedia version of the theorem.
What does it really mean when the teacher asks me to iterate on Gauss-Seidel considering a maximum variation of 0.01?
I don't think so. In real life, you never know the true answer to your problem (call it $x^*$ for convenience). In practice, you use the deviation from the previous iterate as a stopping criterion: instead of computing the error as $|x_n - x^*|$, use $|x_n - x_{n-1}|$.
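For concreteness, here is a minimal Gauss-Seidel sketch that uses exactly this stopping criterion; the system `A`, `b` is placeholder data (any strictly diagonally dominant matrix will converge):

```python
import numpy as np

def gauss_seidel(A, b, tol=0.01, max_iter=1000):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Uses already-updated entries x[:i] and old entries x[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:  # "maximum variation" criterion
            break
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b))  # close to the exact solution (1/6, 1/3)
```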
In how many ways can we choose $10$ cards so there are $3$ exact matches?
There are $\binom{10}{3}$ ways to pick the couples that will make up the three matching pairs. That is a component of your answer, so I will assume that part is clear to you. Now we count the number of ways to pick the remaining $4$ cards. There remain $7$ "couples." We pick $4$ of these, and for each couple we choose the colour that will be used. That can be done in $\binom{7}{4}2^4$ ways, for a total of $\binom{10}{3}\binom{7}{4}2^4$. Remark: About your answer, the issue is the count of "bad" choices, where there are $4$ or more matches. This number is not $\binom{10}{4}\binom{12}{2}$: that expression "double-counts" the $5$-match situations. For the $\binom{10}{4}$ part counts, among others, the situations where we picked each of the couples $1,2,3,4$, while the $\binom{12}{2}$ part counts, among others, the situations where we picked couple $5$. But $\binom{10}{4}$ also includes picking couples $1,2,3,5$, with $\binom{12}{2}$ then counting, among others, picking couple $4$. Thus picking $5$ couples has been double-counted, indeed multiply counted.
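Assuming the natural reading of the deck (10 couples in 2 colours, so 20 cards, from which 10 are drawn), the count can be confirmed by brute force; this runs in a few seconds:

```python
from itertools import combinations

deck = [(c, s) for c in range(10) for s in range(2)]  # 10 couples, 2 colours
count = 0
for hand in combinations(deck, 10):
    matches = sum(1 for c in range(10)
                  if (c, 0) in hand and (c, 1) in hand)
    if matches == 3:
        count += 1
print(count)  # C(10,3) * C(7,4) * 2**4 = 67200
```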
Constrained curve fitting
Bin your data! For instance, take $\{(x_i,y_i) \mid x_i \in [0.2,0.3]\}$ and call this the data chunk for that interval. You will clearly have 10 such intervals, and counting the frequency of $y_i$ in each interval gives you a local estimate of probability. So for instance, the above subset would give a value for $f(0.25)$ which I could use in fitting $f$. Of course, how many intervals you divide into depends on the amount of data and how quickly $f$ is supposed to vary. A good rule of thumb: if you have $n$ data points, divide into $\sqrt{n}$ bins of $\sqrt{n}$ points each. Plotting your fitted curve $f$ over these "averaged" points will probably give you a good idea how well you're estimating the local probability. But note that you can also find a fitting curve $f$ by just doing a normal least-squares fit to the $\{0,1\}$ data: least squares will actually weight it appropriately to try to get the right probability. Note, though, that least squares will only be faithful if you give it enough degrees of freedom to fit the data. For instance: if the true function $f$ has $f(x) = 0$ for all $x \in [0,0.5)$ and $f(x) = 1$ for all $x \in [0.5,1)$, and you try to fit a linear function, you'll get something like $f_0(x) = 3x/2 - 1/4$, which will fit the data decently well but give negative "probabilities" for $x < 1/6$. So make sure to use a sufficiently flexible model (or, equivalently, a model constrained to have $f \in [0,1]$ everywhere).
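Here is a minimal sketch of the binning recipe on synthetic data; the curve `f_true` below is an invented example used only to generate the 0/1 outcomes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.random(n)
f_true = lambda t: np.clip(1.5 * t - 0.25, 0, 1)  # invented probability curve
y = (rng.random(n) < f_true(x)).astype(float)     # observed 0/1 outcomes

bins = np.linspace(0, 1, 11)                      # 10 intervals, as in the text
idx = np.digitize(x, bins) - 1
centers = (bins[:-1] + bins[1:]) / 2
est = np.array([y[idx == i].mean() for i in range(10)])  # frequency per bin
print(np.column_stack([centers, est]))            # local probability estimates
```

These binned estimates are the "averaged" points one would then fit (or plot against) a candidate curve $f$.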
How many such rearrangements are there?
The problem with your analysis is that the $4$ ways that the first place can be filled are of two different types. If we fill it with an H, then the remaining $3$ slots can be filled in $3!$ ways. For the other $3$ ways of filling the first place, we must place the H in fifth place. Then the remaining two slots can be filled in $2!$ ways, giving $(3)(2!)$. Now add the contributions from the two types. Remark: As has been pointed out elsewhere, it is often more efficient to take care of fussy people like H first. That way, we don't end up with two different types. If for each of the $a$ ways of carrying out Task 1 there are $b$ ways to carry out Task 2, then Tasks 1 and 2 can be done in $ab$ ways. But if you cannot use the word "each," then multiplication is not the appropriate tool. In filling the last place using your method, we certainly do not have $2$ ways to fill fifth place for each of the legal ways of filling the first four places.
G is a group and contained within it are two subgroups H and K. Prove/Disprove that H intersect K is a subgroup.
Your "Proving" header seems to be more of a sketch than an actual proof (maybe this was intended). In that case, to show that $H \cap K$ is a subgroup, you want to show three things: closure under multiplication, closure under inverses, and nonemptiness. The existence of a unique identity will follow from the first two (and is often a good way of showing the third). Arguments of the form in your "Showing closure" heading are what you want to be using to prove these properties. For those arguments, rather than stating your arguments "if $a,b \in H$ and $a,b \in K$ satisfy given properties then $a,b \in H \cap K$ satisfy given properties," it would be more clear to state them in the following form: Let $a,b \in H \cap K$. Then $a,b \in H$ and $a,b \in K$. Since $H$ is a group, we have that $a\cdot b \in H$ and since $K$ is a group, we have that $a \cdot b \in K$. Thus $a \cdot b \in H \cap K$. Closure under inverses can be proved similarly. For nonemptiness, maybe my hint earlier was enough, but if it wasn't: Show that the identity is in $H \cap K$.
Combining two or more probabilities
In sports simulations, there is a standard way to combine win probabilities for head-to-head contests in which no tie is possible, even when the contestants have not faced off previously. It should be emphasized that this is heuristic only; there is no first-principles validation for this approach, because the situation is underdetermined. Essentially, suppose that $A$'s winning probability is $p$, and $B$'s winning probability is $q$. Then we write that $$ P(\text{$A$ beats $B$}) = \frac{p(1-q)}{p(1-q)+q(1-p)} $$ and $$ P(\text{$B$ beats $A$}) = \frac{q(1-p)}{p(1-q)+q(1-p)} $$ Your simulation happens to yield these same probabilities, but as I said, that's not provably correct.
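As a function, the heuristic is a one-liner (this combination rule is often called the "log5" method in sports analytics); the sample inputs are arbitrary:

```python
def p_a_beats_b(p: float, q: float) -> float:
    # p = A's overall win rate, q = B's overall win rate.
    return p * (1 - q) / (p * (1 - q) + q * (1 - p))

print(p_a_beats_b(0.7, 0.5))  # 0.7: against an average opponent, A keeps its rate
print(p_a_beats_b(0.7, 0.7))  # 0.5: equally strong opponents are even
```

The two printed cases illustrate why the formula is a popular heuristic: it behaves sensibly at the natural reference points.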
$T^2$ and $N(T^2)$: What are they?
$T^2=T\circ T$ sends a vector $\vec v$ to the vector $T(T(\vec v))$. $N(T^2)$ is the set of vectors $\vec v$ of $V$ that $T^2$ sends to $\vec 0$, actually a subspace of $V$.
reverse of a summation formula
Notice that $\sum_{i=1}^n x^i = x + x^2+ \cdots + x^n$ is a sum of a geometric progression, and we know a closed expression for it (for $x \neq 1$): $$\sum_{i=1}^n x^i = \frac{x(1-x^n)}{1-x}.$$ If you have a result, $a$, and $x$ is known, then we have: $$\frac{x(1-x^n)}{1-x} = a \implies n = \log_x\left(1-\frac{a(1-x)}{x}\right).$$ If the result is really attainable, then the RHS will be a natural number. In general, if we have a geometric progression $(a_1,\ldots, a_n, \ldots)$ with ratio $q = a_{n+1}/a_n$, then: $$S_n = a_1\frac{1-q^n}{1-q}.$$
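In code, the inversion looks as follows (a sketch; the rounding guard is my own addition, to flag values of $a$ that are not attainable):

```python
from math import log, isclose

def geometric_sum(x, n):
    # x + x^2 + ... + x^n, for x != 1.
    return x * (1 - x**n) / (1 - x)

def recover_n(a, x):
    n = log(1 - a * (1 - x) / x, x)  # the closed-form inversion
    return round(n) if isclose(n, round(n)) else None  # None: not attainable

a = geometric_sum(2, 6)   # 2 + 4 + ... + 64 = 126
print(recover_n(126, 2))  # 6
```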