title | upvoted_answer |
---|---|
Linear algebra: Dimension of column space | $m$ is the dimension of the null space $\ker A$. The condition $m < n$ says that it is less than $n$, the number of columns of the matrix.
The rank-nullity theorem says
$$
\dim \ker A + \dim \mbox{im} A = n
$$
The image of $A$ is the set of all linear combinations of the column vectors of $A$, which is the column space.
So the answer is $n - m$.
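For a concrete illustration (a minimal NumPy/SciPy sketch; the matrix here is just an arbitrary example), the rank and the nullity of a matrix always add up to its number of columns:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
rank = np.linalg.matrix_rank(A)     # dim im A, the column space
nullity = null_space(A).shape[1]    # dim ker A
print(rank, nullity, rank + nullity == A.shape[1])  # 2 1 True
```
|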
Is there a way to find out how many times an image is covered? | There is no general method which does not depend on what $f$ and $g$ are; indeed, given any finitely many points $P_1,...,P_n \in \mathbb{R}^2$, together with integers $k_1,...,k_n$ greater than $1$, you can find two polynomial functions $x,y$ such that the curve $(x(t),y(t))$ will go through $P_i$ exactly $k_i$ times for all $i \in \{1,...,n\}$. To see that, you just need to use Lagrange interpolation on the coordinates $(x_i,y_i)$ of each $P_i$.
EDIT: I did not specify it before, but you can also decide at which times $t$ you will have $(x(t),y(t)) = P_i$ |
How many integral triplets ($x, y, z$) satisfy the equation $x^2 + y^2 + z^2 = 1855$? | Honestly, it comes from experience with such proofs. When dealing with sums of two squares, it's typical to work modulo $4$. With sums of three squares, every residue class modulo $4$ is accessible, so we typically work modulo $8$ to eliminate cases.
You should check out the proofs of which integers can be written as sums of $2$ and $3$ squares. You'll find $\bmod 4$ and $\bmod 8$ reasoning all over the place. |
How to determine the existence of certain common eigenvectors in a simpler way? | Partial answer.
(I) The cases $(1,-1)$ and $(-1,1)$ don't occur.
To see this note that if $w$ is a $\lambda_i$-eigenvector of $T_i$ for $i=1,2$ then $w$ is a $\lambda_1 \lambda_2$-eigenvector of $S=T_1 T_2$. But $S:x\otimes y\otimes z\mapsto z\otimes x\otimes y$ has order $3$ so its eigenvalues lie in $\{1,\omega,\omega^2\}$ (where $\omega$ is a primitive cube root of unity).
(II) Provided $\dim V>2$ the case $(-1,-1)$ does occur.
To see this note that any common eigenvector must be a $1$-eigenvector of $S$. That suggests where we should look for these common eigenvectors.
The $1$-eigenspace of $S$ is spanned by vectors of the forms
(a) $e_i\otimes e_i\otimes e_i$ ;
(b) $e_i\otimes e_i\otimes e_j+ e_i\otimes e_j\otimes e_i+e_j\otimes e_i\otimes e_i$ ;
(c) $v_{(i,j,k)}:=e_i\otimes e_j\otimes e_k+ e_j\otimes e_k\otimes e_i+e_k\otimes e_i\otimes e_j$;
where the indices are distinct.
It can now be checked that $v_{(i,j,k)}-v_{(i,k,j)}$ is a common eigenvector of $T_1$ and $T_2$ both having eigenvalue $-1$.
(III) It would be good to list the triples $(a_1, a_2, w)$ with $T_i w= a_i w$ but this I have not done.
More Complete Answer (For those who know some character theory.)
What we have here is a representation of the group $\langle T_1, S=T_1 T_2\rangle \simeq \langle t, s \mid t^2=s^3=(ts)^2=1\rangle\simeq S_3$ on the space $W=V\otimes V\otimes V$ where $\dim V=n$. Call its character $\psi$.
The character table of $S_3$ is
$$
\begin{array}{c|ccc}
\ & 1 & s & t\\
\iota & 1 & 1 &1\\
\sigma & 1 & 1 & -1\\
\theta & 2& -1 & 0\\
\end{array}.
$$
We can easily calculate the traces $\psi(I)$, $\psi(S)$, $\psi(T_1)$: as these matrices act by permuting the basis elements $e_i\otimes e_j\otimes e_k$ of $W$ we need only count how many are fixed. We get at once that $\psi(I)=n^3$, $\psi(S)=n$, and $\psi(T_1)=n^2$. It is then easy to see how $\psi$ decomposes: we must have
$$
\psi ={n+2 \choose 3}\iota +
{n \choose 3}\sigma +
2{n+1 \choose 3}\theta.
$$
That means that $W$ decomposes into ${n+2 \choose 3}$ $1$-dimensional subspaces on which both $T_1$ and $T_2$ act as multiplication by $+1$; ${n \choose 3}$ $1$-dimensional subspaces on which both $T_1$ and $T_2$ act as multiplication by $-1$; and $2{n+1 \choose 3}$ $2$-dimensional subspaces where there are no common eigenvectors.
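As a sanity check on these multiplicities, the dimensions on the right add up to $\dim W = n^3$, which a few lines of Python confirm for small $n$:

```python
from math import comb

# C(n+2,3)*1 + C(n,3)*1 + 2*C(n+1,3)*2 should equal n^3 = dim W
for n in range(1, 9):
    print(n, comb(n + 2, 3) + comb(n, 3) + 4*comb(n + 1, 3) == n**3)  # always True
```
|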
Proving a topology is not induced by a metric | Let $X$ be a topological space. We say that its topology is induced by a metric if there exists a metric on $X$ such that the set of open balls under that metric is a basis for the topology. |
Is $\sum k(f_k(0)-f_{k-1}(0))$ infinite? | So here is the answer:
Note that $1-f(x)=1-x-\frac13(1-x)^2$
$$\implies \begin{aligned}\frac1{1-f(x)} =\frac{1}{(1-x)(1-\frac13(1-x))} & =\frac1{1-x}[1+\frac13(1-x)+O((1-x)^2)]\\ & =\frac1{1-x}+\frac13+O(1-x)\end{aligned}$$
Therefore
$$\frac1{1-f_{k+1}(0)}=\frac1{1-f_k(0)}+\frac13+O(1-f_k(0))$$
Therefore adding the above equalities for each $k$ we get
$$\frac1{1-f_n(0)}=1+\frac{n}{3}+\sum_{i=0}^{n-1} O(1-f_i(0))$$
Now note that RHS is $O(n)$, hence $1-f_n(0)$ is $O(\frac1n)$. Hence
$\displaystyle \sum_{i=1}^{n-1} O(1-f_i(0))$ is $O(\log n)$.
Hence
$$\lim_{n\to \infty} \frac{n(1-f_n(0))}{3}=1$$
This implies $1-f_n(0) \sim \frac3n$
Then $a_k =f_k(0)-f_{k-1}(0) \sim \frac{3}{k^2}$. Hence $\sum ka_k$ diverges!
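A quick numerical check (a minimal sketch, assuming $f(x)=x+\frac13(1-x)^2$, which matches the expansion used above): $k\,(1-f_k(0))$ settles near $3$, while the partial sums of $\sum k\,a_k$ keep growing like $3\log n$:

```python
f = lambda x: x + (1 - x)**2 / 3

x, s, prev = 0.0, 0.0, 0.0
for k in range(1, 100001):
    x = f(x)                # x = f_k(0)
    s += k * (x - prev)     # partial sum of sum_k k*a_k
    prev = x
    if k in (10, 100, 1000, 10000, 100000):
        print(k, k * (1 - x), s)
```
|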
Evaluate $\lim \limits_{n\to\infty} ne^{-nx^2}$ | Squeeze theorem for $\;x\neq 0\;$; otherwise the limit is infinity:
$$0\le\frac n{e^{nx^2}}=\frac n{1+nx^2+\frac{n^2x^4}2+\ldots}\le\frac n{\frac{n^2x^4}2}\longrightarrow0$$ |
A Triangle and Intersecting Segments | Hint :
$1>$ Check the definition of the centroid of a Triangle
$2>$ Check in what ratio the cetroid divies the lines (in question) . |
Proof of diagonalizable linear transformation | Hint: Compute $T^2$. |
Does this system have a root? | It seems that the setting is the following. One considers some positive real numbers $c_i$ and some independent random variables $X_i$, $Y_i$, $Z_i$, all exponential, where the common parameter of $X_i$, $Y_i$ and $Z_i$ is some positive $\lambda_i$. Let $\lambda=(\lambda_i)_i$ and $S=X_1+X_2+\cdots+X_N$. For every $x=(x_i)_i$ with positive entries, and every $i$, let $\hat x_i=(x_j)_{j\ne i}$ and
$$
A_i(\hat x_i)=[\forall j\ne i, X_j\geqslant\max\{Y_j,Z_j,x_j\}].
$$
The question is to find some $x=(x_i)_i$ with positive entries such that $f_i(\lambda,x)=c_i$ for every $1\leqslant i\leqslant N$, where
$$
f_i(\lambda,x)=E\left[\log\left(1+\frac{x_i}{S-X_i}\right);A_i(\hat x_i)\right].
$$
To solve this system explicitly seems hopeless in the general case. To show that some solution exists (or not), one could study the functions $f_i(\lambda,\ )$ on the boundary of their domain. And maybe study in depth the case $N=2$... |
A vector field without a stationary point or a limit cycle | Let $U$ be the annulus $1<x^2+y^2<9$. Draw the spiral $$\gamma:\quad{\mathbb R}\to U,\qquad t\mapsto r(t)\bigl(\cos t,\sin t\bigr)\ ,$$ whereby $t\mapsto r(t):=2+\tanh t$ is monotonically increasing from $1$ to $3$ during the infinitely many turns at both ends. Draw unit tangent vectors along $\gamma$, and create a rotationally symmetric vector field $X$ on $U$ coinciding with the drawn tangent vectors along $\gamma$. Then all solution curves are spirals congruent to $\gamma$. |
Properties of matrices, including span , vectors and products | You need to go back to basics. Both your assertions are false more often than they are true.
The product of a $1 \times n$ and an $n \times 1$ matrix is a $1 \times 1$ matrix.
The vectors $(0,0,0)$, $(1, 0, 0)$, $(2,0,0)$, $(3, 0, 0)$ do not span $\mathbb{R}^3$.
You should now see why your third assertion is false. |
Radius of convergence of $\sum_n \frac{(-1)^nx^n}{\ln(n+1)}$ | You can apply the ratio test. Find $L = \lim_{n \to \infty} \left| \frac{(-1)^{n+1} x^{n+1} \ln (n+1)}{(-1)^n x^n \ln (n+2)}\right| = |x| \lim_{n \to \infty} \frac{\ln (n+1)}{\ln (n+2)}$. Apply L'Hospital's rule to get $L = |x|$. Thus the series converges absolutely if $|x| <1$, so the radius of convergence is $1$. |
Orthogonal Projections Are Symmetric - Geometric Intuition | Here's an idea you might like. Suppose that $P$ is a projection matrix. That is, $P^2 = P$ but $P$ is not necessarily symmetric. In other words, $P$ is "the projection onto $\operatorname{im}(P)$ along $\ker(P)$". The transpose $P^T$ is another projection; you can verify (by various means) that $P^T$ represents "the projection onto $\ker (P)^\perp$ along $\operatorname{im}(P)^\perp$."
The only time these projections are the same is when $\ker (P) = \operatorname{im}(P)^\perp$. That is, $P$ and $P^T$ can only be the same if $P$ is the orthogonal projection onto $\operatorname{im}(P)$.
We can see that $Q = P^T$ must be the projection onto $\ker (P)^\perp$ along $\operatorname{im}(P)^\perp$ as follows. Note that any vector $x$ can be decomposed into $x = x_{im} + x_{\ker}$, where $x_{im} = Px$ and $x_{\ker} = x-Px$.
For any vector $y \in \ker(P)^\perp$ and any $x$, we have
$$
\langle y, x\rangle = \langle y, x_{im} + x_{\ker} \rangle = \langle y, x_{im} \rangle = \langle y, Px \rangle = \langle Qy,x \rangle.
$$
So, $Qy = y$. Similarly, we can show that $Qy = 0$ for $y \in \operatorname{im}(P)^\perp$. |
Convergence in $L^p$ and $L^q$ | a) Your idea is good. Now apply Fatou's lemma to the sequence $\left(f_{n_k} \right)$. For the second part, use the pointwise inequalities
\begin{align}
\left|f_n-f\right|^q&\leqslant R^{q-1}\left|f_n-f\right| +\left|f_n-f\right|^q\mathbf 1\left\{\left|f_n-f\right|\gt R\right\} \\
&=R^{q-1}\left|f_n-f\right| +\left|f_n-f\right|^p\left|f_n-f\right|^{q- p} \mathbf 1\left\{\left|f_n-f\right|\gt R\right\}\\
&\leqslant R^{q-1}\left|f_n-f\right| +\left|f_n-f\right|^pR^{q- p} \\
&\leqslant R^{q-1}\left|f_n-f\right| +2^{p-1}\left(\left|f_n\right|^p+\left|f\right|^p\right)R^{q- p} ,
\end{align}
then integrate.
b) Indeed, we can use Egoroff's theorem: fix $\varepsilon$ and consider $F$ of measure smaller than $\varepsilon$ and $f_n\to f$ uniformly on $E\setminus F$. Then use Hölder's inequality with the exponent $p/q$ and its conjugate to control $\int_F\left|f_n-f\right|^q$. |
How to find all polynomials satisfying $P(x^2+x-4)=P^2(x)+P(x)$? | For $x=2$ the relation gives $\require{cancel}\,\bcancel{P(2)}=P^2(2)+\bcancel{P(2)}\,$, therefore $P(2)=0\,$. Assuming $P \not \equiv 0$, there must exist a multiplicity $n \ge 1$ and some polynomial $Q(x)$ such that $P(x)=(x-2)^n\cdot Q(x)$ and $Q(2) \ne 0$. Substituting back into the relation:
$$
\left(x^2+x-6\right)^n \cdot Q(x^2+x-4)=(x-2)^{2n} \cdot Q^2(x)+(x-2)^n \cdot Q(x)
$$
Since $x^2+x-6=(x-2)(x+3)\,$:
$$
\bcancel{(x-2)^n} \cdot (x+3)^n \cdot Q(x^2+x-4)=\bcancel{(x-2)^n} \cdot (x-2)^n \cdot Q^2(x)+\bcancel{(x-2)^n} \cdot Q(x) \\[5px]
\iff (x+3)^n \cdot Q(x^2+x-4)=(x-2)^n \cdot Q^2(x)+Q(x)
$$
For $x=2\,$, the above reduces to $5^n \cdot Q(2) = Q(2)\,$, but the equality cannot hold for $n \ge 1$ and $Q(2) \ne 0\,$. Therefore, the only solution is the zero polynomial $P \equiv 0\,$. |
Show that $\DeclareMathOperator{im}{Im} \im(\alpha) \cap \im(\beta)={0_v}=\ker(\alpha) \cap \ker(\beta)$ | $\DeclareMathOperator{\image}{im}$
Hint: Let $n = \dim(V)$.
Since $\image(\alpha) + \image(\beta) = V$, we may state that:
$\dim(\image(\alpha)) + \dim(\ker(\alpha)) = n$
$\dim(\image(\beta)) + \dim(\ker(\beta)) = n$
$\dim(\image(\alpha)) + \dim(\image(\beta)) \geq n$
$\dim(\ker(\alpha)) + \dim(\ker(\beta)) \geq n$
You can use this to conclude that
$$
\dim(\ker(\alpha)) + \dim(\ker(\beta)) = \dim(\image(\beta)) + \dim(\ker(\beta)) = n
$$
from which you may directly deduce the desired conclusion by noting that for subspaces $A,B \subset V$, we have
$$
\dim(A + B) \leq \dim(A) + \dim(B)
$$
with equality if and only if $A \cap B = \{\vec 0\}$. |
Where does $\pi$ of the class number formula come from? | (NB: This was meant to be a comment, not an answer, but it rapidly became too lengthy.)
It seems to me that the point is not the appearance of $\pi$ in the class number formula (after all, this could be considered as a technical by-product of the proof), but the deeply mysterious way in which this formula binds together an algebraic object (the class group) and an analytic object (the $\zeta$-function) by means of a transcendental determinant (the regulator). I don't quite agree with @Mathmo123 that Tate's thesis really explained why this happens. Actually, Tate resolutely adopted the global-local point of view by defining generalized $\zeta$-functions as integrals over the idèle group of certain weight functions, which allowed him to apply the full force of abstract Fourier analysis in locally compact abelian groups to establish at one stroke an analytic continuation and a functional equation. This idelic approach illustrates the power of the global-local principle in number theory, which consists in putting the $p$-adic worlds on an equal footing with the archimedean world. But it remains to explain why this principle works so well.
Coming back to the OP question, it is natural to turn it around and wonder why the class number $h$ pops up in a formula giving the residue of $\zeta_F$ at $s=1$. Actually the powers of $\pi$ can be cancelled by applying the functional equation, which yields the formula $\zeta_F(0)^*=-Rh/w$ for the special value $\zeta_F(0)^* :=$ the first nonzero coefficient in the Taylor expansion of $\zeta_F$ at $s=0$. Here $w$ is the order of the group of roots of unity contained in $F$. Not only does the special value at $0$ look simpler, but the rational number $h/w$ makes you want some of the same at all the negative values $s=-n, n\in \mathbf N$. Hints are given by the special values of the Riemann $\zeta$-function. It is classically known that: $\zeta(0)=-1/2, \zeta (1-2m)=-B_{2m}/2m$, where $B_k$ is the $k$-th Bernoulli number, $-2m$ is a simple zero, $\zeta(-2m)^*= (?)$ (the mystery is the same for $\zeta(2m+1)$). Number theorists have a mannerism: when facing a rational number, they inevitably ask whether the numerator and the denominator could be the orders of some finite groups. Astonishingly, this is the case. At the beginning of the 1970's, Lichtenbaum proposed the following conjecture (some jargon is inevitable here): for any number field $F$, $\zeta_F(1-m)^*=\pm 2^? R_m\mid K_{2m-2}O_F\mid/\mid tors K_{2m-1}O_F\mid$ for any $m \ge 2$. Notations: the exponent $(?)$ can be made precise; $R_m$ is the Borel-Beilinson regulator, an elaborate generalization of the Dedekind regulator $R$ (see below); the Quillen groups $K_{i}O_F$ are topological objects attached to the ring of integers $O_F$. For some more details, see e.g. [BK].
The Lichtenbaum conjecture is now a theorem when $F$ is an abelian field. But the proof was made possible only inside the (partly conjectural) framework of the Bloch-Kato conjectures on the special values of "motivic L-functions".
The so-called search for "motives" goes back to an original idea of Grothendieck, which one could find far-fetched but which I prefer to call "platonist". Remember Plato's "allegory of the cave": we humans live in a cave, and the physical reality which we perceive consists in shadows cast on the walls by the sun at our backs; studying these shadows could occupy a lifetime, but to understand the true "reality", we must turn around and face the archetype which projects these shadows. Grothendieck applied this philosophical concept to algebraic/arithmetic geometry: around a given variety are floating a host of dissimilar cohomologies (Betti, de Rham, étale...), which become isomorphic when passing to an algebraic closure, but such a passage destroys all the arithmetical properties we are interested in. Following Plato, Grothendieck suggested looking not at the shadows but at the archetype, the conjectural motivic cohomology, which would cast its shadows by means of regulator maps. This was achieved at the beginning of this century by Voevodsky, giving in particular a precise relationship between K-theory and Galois/étale cohomology via Chern class maps (=the regulator maps, whose determinants are the regulator numbers which appear in the special values).
To summarize: the blend of algebra-analysis-topology in the special values of $\zeta$ certainly remains mysterious, but it is natural, in that it actually reveals the profound unity of mathematics. There is a legend (?) about the last words of Hermite. On his deathbed he is said to have declared: "Now I'll be able to see Zeta face to face".
[BK] "The Bloch-Kato Conjecture for the Riemann Zeta Function", Proceedings of the 2012 Pune conference, edited by John Coates & al., London Math. Soc. LNS 418,2015 |
Does this polynomial belong to this ideal? | Your idea that $f \notin (x^2+1) = I$ is correct. If $f \in (x^2+1)$, then $f(x)-(x^2+1) = x$ would also be in $I$.
But every nonzero polynomial in $I$ has degree at least $2$, a contradiction since $\deg x = 1$.
Therefore $f \notin I$. |
Deducing a result about entire functions | Let $f(z)=\sum_{n\geq 0}a_nz^n$ be the power series expansion of $f$, with infinite radius of convergence. Then, for $r\geq 0$ fixed, the function
$$
g:\theta\longmapsto f(re^{i\theta})=\sum_{n\geq 0}a_nr^ne^{in\theta}
$$
is a Fourier series which converges normally, since $\sum_{n\geq 0}|a_n|r^n$ converges. In particular, $g$ is $2\pi$-periodic and continuous, hence in $L^2(0,2\pi)$. So the equality you want is Parseval's identity. |
Square of antisymmetric matrix is symmetric and negative definite | $$(M^2)^T=(M^T)^2=(-M)^2=M^2$$
hence $M^2$ is symmetric
Let $x\in \mathbb{R}^n$,
$$(x,M^2x)=x^TM^2x=x^T(-M^T)Mx=-x^TM^TMx=-\|Mx\|_2^2 \le0$$
hence $M^2$ is negative semidefinite.
You need additional conditions to prove that $M^2$ is negative definite. As stated, $M$ could, for example, be the null matrix. |
How to determine behaviour of this derivative in the following differential equation? | The "slope field" defined by a differential equation of the form $$\dot x=f(x)\tag{1}$$ in the $(t,x)$-plane is horizontally invariant: All "infinitesimal curve elements" on a horizontal line $x={\rm const.}$ have the same slope. It therefore suffices to draw these curve elements on the $x$-axis, i.e. on the vertical axis $t=0$. The slope of the curve element at $(0,x)$ is given by the value $f(x)$, and may change its sign at points $(0,x_k)$ where $f(x_k)=0$. When $f(x_k)=0$ the prescribed slope is $=0$, and this means that $x(t)\equiv x_k$ is a solution of $(1)$. In an interval $x_{k-1}<x<x_k$ the sign of the slope is constant, and as different solutions cannot cross it follows that in this strip the solutions behave qualitatively like $\tanh$, or $-\tanh$, depending on the sign of $f(x)$.
For the example at hand we have to look at the graph of $f(x)=ax+\cos x$ for given $a\in[-0.1,0.1]$. The function $f$ has a finite number of zeros. For most values of $a$ all of these zeros are simple (i.e., $f$ changes sign there), and for some "special" values of $a$ one of the zeros is a double zero, and $f$ does not change sign there.
These explanations should allow you to draw the required figures for various $a$'s in the given interval. |
How to show that the sequence $a^{1/n}$ converges to $1$ when $n \to \infty$? (using the epsilon-delta definition) | We may assume that $a\geq 1$ (if $0<a<1$ replace $a$ with $1/a$). Let $x=a-1\geq 0$ and use the Bernoulli inequality,
$$1\leq a^{1/n}=(1+x)^{1/n}\leq 1+\frac{x}{n}=1+\frac{a-1}{n}.$$
Hence
$$|a^{1/n}-1|\leq \frac{a-1}{n}<\epsilon$$
as soon as $n>\frac{a-1}{\epsilon}$.
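A short numerical illustration of the bound (the value $a=1000$ is just an example):

```python
a = 1000.0
for n in (1, 10, 100, 1000, 10000):
    # a^(1/n) stays below the Bernoulli bound 1 + (a-1)/n and tends to 1
    print(n, a**(1 / n), 1 + (a - 1) / n)
```
|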
Prove that the set of all algebraic numbers is countable: proof using fundamental theorem of algebra | It's essentially correct, but it can be simplified.
For every $t=(a_0,a_1,\dots,a_n)\in T_n=\mathbb{Z}^{n+1}\setminus\{(0,\dots,0)\}$, consider
$$
R(t)=\{z\in\mathbb{C}:a_0+a_1z+\dots+a_nz^n=0\}
$$
Then the set $R(t)$ is finite and consists of algebraic numbers. Since every algebraic number belongs to $R(t)$ for some $t\in T_n$ and some $n>0$, the set of algebraic numbers is
$$
\bigcup_{n>0}\,\bigcup_{t\in T_n}R(t)
$$
For every $n$, the set
$$
A_n=\bigcup_{t\in T_n}R(t)
$$
is the countable union of finite sets, so it's countable. Therefore the set $A$ of algebraic numbers is
$$
A=\bigcup_{n>0}A_n
$$
hence a countable union of countable sets.
Note: $A_n$ might in principle be finite (actually it isn't), but this wouldn't invalidate the proof. Read “countable” as “finite or countable”. |
Rational functions on algebraic curves | Your definition of a rational function is just fine for where you're at and your function exactly fits it. To see this, note that you need to verify that $f=y$ and $g=x$ are elements of $K[x,y]/(F)$ and that $g\neq 0$ (as a function). But this is clear. |
Zero to the negative power. | Traditionally, $x^{-k}$ denotes the multiplicative inverse of $x$, $x^{-1}$, raised to the $k^{th}$ power. For instance, in $\mathbf{R}$, we have that the inverse of $2$ is $\frac{1}{2}$ so that $2^{-1}=\frac{1}{2}$. Thus, $2^{-k}=(\frac{1}{2})^k=\frac{1}{2^k}.$ In the case of $0$, there is no multiplicative inverse in $\mathbf{R}$ (or in any field, for that matter), so that the symbol $0^{-1}$ makes no sense. For instance, can you find a real number $r$ so that $r\cdot 0=1$? The answer is no.
In short, $0^{-1}$ does not make sense without further interpretation. |
circular differentiation | Your confusion likely stems from using the symbol $f$ to represent two different functions. Let $y = f( \mu, \sigma^2)$ and $y = g( \psi, \sigma^2)$. Notice that it's the same quantity $y$, but the functions are different. Now, if we name one more function to convert between $\mu$ and $\psi$, namely
$$
\psi = h(\mu, \sigma) = e^{\mu + \frac{\sigma^2}{2}},
$$
then we have the composition $f = g \circ h$. The chain rule now applies, in Leibniz notation, where products of derivatives look like "canceling fractions:"
$$
\begin{align}
\frac{dy}{d\mu} &= \frac{dy}{d\psi} \cdot \frac{d\psi}{d\mu} + \frac{dy}{d\sigma} \cdot \frac{d\sigma}{d\mu} \\
&= \frac{dy}{d\psi} \cdot \frac{d\psi}{d\mu},
\end{align}
$$
assuming that $\sigma$ doesn't depend on $\mu$. |
Sequence of entire function that converges uniformly over on sets with empty interior | If $A$ is a subset of $\Bbb C$ with nonempty interior, then there exists $z\in A$ with $z=x+i\,y$, $y\ne0$. Then
$$
\sin(n\,z)=\sin(n\,x)\cosh(n\,y)+i\cos(n\,x)\sinh(n\,y).
$$
It is now easy to see that this sequence is unbounded. |
Combinations of consecutive digits | Think of 4, 6, 8 as a single entity. That leaves you with 5 objects to be rearranged in $5!$ ways. Of course, 4, 6, 8 can also be rearranged among themselves in $3!$ ways, giving $5!\times 3!$ arrangements in total. |
solving diophantine equations using congruences | Here's the argument outlined in equational logic (vs. less precise English language). No use is made of $\,4x-51y = 9.\,$ The elimination steps use $E_z\!:\,z = c,\, az = b\iff z=c,\, ac = b,\,$ familiar as an elementary row operation in Gaussian elimination (triangularization) in linear algebra.
$$\begin{align}
4x+51y\, =\, 9\, \ \ \ \ \ \ \ \ \ \ \ \ \ & \\[.2em]
\iff\ 4x+51y\, =\, 9\, \ \ \ \ \ \ \ \ \&\ \ &x = 15+51t\ \ \&\ \ y = 3+4s\\[.2em]
\iff 204(t+s) = -204\ \ \&\ \ &x = 15+51t\ \ \&\ \ y = 3+4s,\ \ \ \ {\rm by}\ \ E_x,\,E_y\\[.2em]
\iff\ \ \ \ \ \ \ \ \ t+s = -1\ \ \ \ \ \ \&\ \ &x = 15+51t\ \ \&\ \ y = 3+4s,\ \ \ \ {\rm by\ cancel}\ 204\\[.2em]
\iff \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \phantom{t+s = -1\ \&}\ &x = 15+51t\ \ \&\ \ y = -1-4t,\ \ {\rm by}\ \ E_s \
\end{align}$$
What you did - solve one congruence first, then substitute - is generally simpler and the more common way to proceed, i.e. in equational language as above
$$\begin{align}
4x+51y\, =\, 9\, \ \ \ \ \ \ \ \ \ \ \ & \\[.2em]
\iff\ 4x+51y\, =\, 9\, \ \ \ \ \ \ \&\ \ &x = 15+51t\\[.2em]
\iff 51y = -51\!-\!204t\ \ \&\ \ &x = 15+51t,\ \ \ {\rm by}\ \ E_x\\[.2em]
\iff\ \ \ \ \:\! y = -1-4t\ \ \ \ \ \ \&\ \ &x = 15+51t,\ \ \ {\rm by\ cancel}\ 51
\end{align}\qquad$$
Remark $ $ If instead of the above (bidirectional) equivalences we use only unidirectional arrows $(\Rightarrow)$ then we obtain only necessary conditions on the solutions. Then to verify sufficiency we have to check that the candidate solutions actually work (are not extraneous roots). See here for more on the insufficiency of unidirectional inferences. |
$\Delta ABC$ is right angled triangle. $AP$ and $AQ$ meet $BC$ and $BC$ produced in $P$ and $Q$ and are equally inclined to $AB$. | Hint: Finish off with the Sine Law.
From the angle bisector theorem, $$\frac {CP}{CQ} = \frac {AP}{AQ}$$ The notations are different since I extended $BC$ towards $C$.
By Sine Law, $$\frac {AP}{\sin B} = \frac{BP}{\sin \angle PAB} = \frac {BP}{\sin (90^\circ - \angle PAC)}$$ $$\frac {AQ}{\sin B} = \frac{BQ}{\sin \angle QAB} = \frac {BQ}{\sin (90^\circ + \angle QAC)} = \frac {BQ}{\sin (90^\circ - \angle PAC)}$$
Hence $$\frac {AQ}{BQ} = \frac {AP}{BP}$$$$\leadsto\frac {AP}{AQ} = \frac {BP}{BQ} = \frac {CP}{CQ}$$ |
How many $4$ digit even numbers have all $4$ digits distinct? | On problems like this one, it is best to start counting with whatever has the most constraints. Here we need the last digit to be even, and the first digit to be non-zero. The two central digits have no constraint other than being distinct from all the others.
We start by splitting the count according to whether the last digit is $0$ or not.
If the number ends with a $0$ then there are $9$ choices for the first digit, $8$ for the second and $7$ for the third, which makes $1\times 9\times 8\times 7=504$ possibilities.
If the number is even, ending with something other than $0$, then there are $4$ choices for the last digit, $8$ choices for the first digit (neither $0$ nor the last digit), $8$ for the second digit and $7$ for the third digit, which makes $4\times 8\times 8\times 7=1792$.
Together, this gives $2296$ even numbers with $4$ distinct digits. Note that this does not allow a leading $0$, as you seem to want based on the question.
Proof that there is a shortest path between two specific nodes in a weighted, connected digraph which has no negative cycles | Some thoughts that may be too long for a comment: assuming the length of a path $e_1 \ldots e_n$ to be $W(e_1) + \cdots + W(e_n)$, fix $s,t \in V$ and consider $\mathcal P$ the set of (finite) paths and $\mathcal P_{s,t}$ the subset of paths from $s$ to $t$. We have a length function
$$
\ell \colon \mathcal P \to \Bbb R, \quad \ell(e_1\ldots e_n) = W(e_1) + \cdots + W(e_n)
$$
which is compatible with path concatenation, that is $\ell(\omega\omega') = \ell(\omega) + \ell(\omega')$.
A shortest path, thus, would be a path $\omega \in \mathcal P_{s,t}$ that minimizes $\ell$ when restricted to $\mathcal P_{s,t}$.
If we have a cycle $c$ connected to both $s$ and $t$ via paths $\omega_s,\omega_t$, then $\omega_s c^k \omega_t$ is also a path from $s$ to $t$ that goes trough $\omega_s$ then loops around $c$ for $k$ times, and then goes through $\omega_t$. By direct computation,
$$
\ell(\omega_s c^k \omega_t) = \ell(\omega_s) +k\ell(c)+\ell(\omega_t).
$$
Hence if the cycle $c$ is negative, by the previous formula the function $\ell$ takes arbitrarily large negative values in $\mathcal P_{s,t}$ and thus can't attain a minimum value.
Now, your task is to show the converse. Also, note that to be a problem we needed the aforementioned cycle to be connected to $s$ and $t$. If we have negative cycles in, say, a connected component disjoint from that of $s$ and $t$, they have no impact on the cost of paths $s \to t$.
Edit: okay, some more thoughts for the converse. What I've come up with may be a bit cumbersome, I hope someone posts a clever proof soon. But anyway, here's my attempt:
Fix $\omega$ a path from $s$ to $t$. If $\omega = \omega' c \omega ''$ with $c$ a positive (or rather non-negative) cycle, then $\ell(\omega) \geq \ell(\omega' \omega'')$. Thus, to check whether $\ell$ attains a minimum value, we can restrict ourselves to paths without non-negative cycles.
But then, by hypothesis, the paths considered have no cycles. If your graph is finite, there are finitely many vertices and so, without making a cycle, there are finitely many options for a path from $s$ to $t$, i.e. the subset $S \subset \mathcal P_{s,t}$ of acyclic paths is finite. Hence $\ell$ attains a minimum in $S$, and by the previous observation this is a minimum in $\mathcal P_{s,t}$. |
Is there a triangulation of a closed surface with each vertex incident to $n\ge 7$ triangles? | There does always exist such a surface. There is a reasonably short proof using a big tool, namely Selberg's Lemma that every linear group has a torsion free subgroup of finite index.
Assuming equilateral triangles, the symmetry group of the tiling of the hyperbolic plane $\mathbb H^2$ that you depict is a reflection group. In particular, if you pick any tile, then subdivide it into 6 triangles each with angles $\pi/2$, $\pi/3$, $\pi/n$, and then take $T$ to be any one of those 6 triangles, the symmetry group is generated by reflections in the three sides of $T$. That group is a Coxeter group
$$C(2,3,n) = \langle a,b,c \mid a^2 = b^2 = c^2 = (ab)^2 = (bc)^3 = (ca)^n = \text{Id} \rangle
$$
Furthermore, $T$ is a fundamental domain for the action of $C(2,3,n)$. These conclusions all come from the Poincaré Fundamental Polygon Theorem.
We can pass to an index 2 subgroup $R(2,3,n) < C(2,3,n)$ in which the sum of the exponents of $a,b,c$ in each word is even. One can show that $R(2,3,n)$ is generated by the rotations $ab$, $bc$ and $ca$ of order $2,3,n$ respectively, with defining relation $(ab)(bc)(ca) = \text{Id}$. The action of $R(2,3,n)$ on $\mathbb H^2$ therefore preserves orientation. The orientation preserving isometry group of the hyperbolic plane is a linear group, namely $SO(2,1)$. Selberg's Lemma therefore applies, yielding a torsion free finite index subgroup $\Gamma < R(2,3,n) < C(2,3,n)$. The quotient $\mathbb H^2 / \Gamma$ is the surface that you desire. |
Existence and uniqueness for initial value problem with $y'=1/(1+|y|)$ | Another way to check the Lipschitz condition of $F(y)=1/(1+|y|)$ is to use the derivative $F'(y)$. Granted, it does not exist at $0$. But since $F$ is even, we have
$$F(a)-F(b) = F(|a|)-F(|b|)$$
which reduces the consideration to nonnegative arguments. And there the mean value theorem only requires differentiability on $(0,\infty)$.
Since $|F'(y)|=1 /(1+y)^2\le 1$ for $y>0$, the function is Lipschitz with $L=1$. Global existence and uniqueness follow from the Picard theorem. |
Plotting an animated hypocycloid | You misunderstand the meaning of $x(\theta)$. The parentheses do not indicate multiplication; they instead indicate that $x$ is a function of the parameter $\theta$. To plot this curve for a given $R$, $r$, and $k$, all you need to do is plug various values of $\theta$ into each equation on the RHS; the first equation gives you the $x$-coordinate and the second gives you the $y$-coordinate.
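A minimal plotting sketch, assuming the standard hypocycloid parametrization $x(\theta)=(R-r)\cos\theta+r\cos\left(\frac{R-r}{r}\theta\right)$, $y(\theta)=(R-r)\sin\theta-r\sin\left(\frac{R-r}{r}\theta\right)$ and the example values $R=5$, $r=1$:

```python
import numpy as np
import matplotlib.pyplot as plt

R, r = 5.0, 1.0                        # example radii: k = R/r = 5 cusps
theta = np.linspace(0, 2*np.pi, 1000)  # parameter values to plug in
x = (R - r)*np.cos(theta) + r*np.cos((R - r)/r*theta)
y = (R - r)*np.sin(theta) - r*np.sin((R - r)/r*theta)

plt.plot(x, y)
plt.axis('equal')
plt.show()
```
|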
Is the space of positive linear functionals on a $C^{*}$-algebra $\mathcal{A}$ closed in the norm topology of $\mathcal{A}^{*}$? | The answer is positive if your $C^*$-algebra $A$ has a unit, say $e$. In that case, there is a theorem asserting that a bounded functional is positive on $A$ if and only if its norm is attained at $e$.
Therefore, if $f_n\to f$ in norm, then $|f_n(e)-f(e)|\to 0$, and also $f_n(e)=\|f_n\|\to\|f\|$, hence $f(e)=\|f\|$ so $f$ is a positive functional. |
Need help with a proof concerning zero-free holomorphic functions. | If $f$ is zero free and it is defined in a simply-connected domain, you can define a logarithm of $f$ as
$$
r(z)=\int_{z_0}^z \frac{f'(w)\,dw}{f(w)}.
$$
This integral is well defined (it does not depend on the path of integration), as $f'/f$ is holomorphic in a simply-connected domain.
Clearly $f(z)=f(z_0)\exp\big(r(z)\big)$.
Then $h(z)=w_0\exp\big(r(z)/t\big)$, where $w_0^t=f(z_0)$. |
What does the notation $f\colon A\to B$ mean? | $f:A\to B$ means $f$ is a function from $A$ to $B$.
Example:
$\begin{align*}f:\Bbb R& \to \Bbb R_+\\
x & \mapsto x^2\end{align*}$
You've certainly already seen functions defined as $f(x)=x^2$ but as you start doing more complicated things with functions, you need the "formula" plus two other things: the domain $A$ and the codomain $B$.
The reason for that is that the function I defined above is not bijective (if I give you $f(x)$, you cannot find $x$ because it could be $\sqrt{f(x)}$ or $-\sqrt{f(x)}$), but I can define another function that is bijective:
$\begin{align}g:\Bbb R_+& \to\Bbb R_+\\
x & \mapsto x^2\end{align}$
Because now you know the $x$ I took to form $g(x)$ is in $\Bbb R_+$, so it cannot be $-\sqrt{g(x)}$: it has to be $\sqrt{g(x)}$.
Therefore you can define
$\begin{align}h:\Bbb R_+& \to \Bbb R_+\\
x & \mapsto \sqrt{x}\end{align}$
And $h$ will be the inverse function of $g$ which we write as $g^{-1}=h$. Also note that $f$ does not have an inverse function. |
How to prove b | a iff remainder = 0 | In the other direction, you have $b|a$, and you want to prove that the remainder $r$ is $0$.
$b|a$, therefore $a = bq$ for some $q$.
On the other hand, $a = bp + r$ for some $p$ and some $0 \le r < b$.
Therefore, $bq = bp + r$, so $b(q-p) = r$.
If $q < p$, then $r < 0$, a contradiction. On the other hand, if $q > p$, then $r \ge b$, again a contradiction. So we have $q=p$, therefore $r=0$.
Also, if you have already proven the uniqueness of the remainder (which goes pretty much the same way as the above), you can simply show that $0$ works as the remainder, so that must be it. |
When does a finite ring become a finite field? | HINT 1) If $\;\rm R\;$ is finite then $\;\rm x\to r\:x\;$ is onto iff 1-1, so $\;\rm R\;$ is a field iff $\;\rm R\;$ is a domain.
2) is Wedderburn's little theorem. |
left-continuity / right-continuity of two processes with cycling definition. | My guess for now is that it depends on the notation of $dN_s$. Depending on its definition, one has different meaning of the stochastic integral, and thus either left or right continuity. |
Solving $|x-1| + |x-2| \ge 4$ | If $x\le 1$ then $x-1\le 0$ and $x-2<0$, so $|x-1|+|x-2|=-(x-1)-(x-2)=-2x+3$.
If $1\lt x\lt 2$ then $|x-1|+|x-2|=x-1-(x-2)=1$.
If $x\ge2$ then $x-1\gt0$ so $|x-1|+|x-2|=(x-1)+(x-2)=2x-3$.
Thus, $|x-1|+|x-2|\ge4$ when $x\le-\frac12$ or $x\ge\frac72$.
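A quick spot-check of the two endpoints (Python):

```python
for x in (-0.6, -0.5, -0.4, 3.4, 3.5, 3.6):
    print(x, abs(x - 1) + abs(x - 2) >= 4)
# True exactly for x <= -0.5 and x >= 3.5
```
|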
Sketch phase portrait of $x'' + 3x^2 - 3 = 0$ | Write the Jacobian at the critical points ${\bf x}^*$
$$
J({\bf x}^*) = \pmatrix{0 & 1 \\ -6x^* & 0}
$$
with eigenvalues $\lambda^2 = -6x^*$. That means that for
${\bf x}_1^* = (+1,0)$ the eigenvalues are $\lambda_1^{\pm} = \pm i \sqrt{6}$
Solution is a cycle
${\bf x}_2^* = (-1,0)$ the eigenvalues are $\lambda_2^{\pm} = \pm \sqrt{6}$
Solution is a saddle point. You can find the directions of the stable and unstable manifolds by calculating the eigenvectors. I will leave that part to you
Here's a sketch, which the short script below reproduces.
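A minimal sketch of the phase plane, writing the equation as the first-order system $x'=y$, $y'=3-3x^2$ (the plotting window is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

# phase plane of x' = y, y' = 3 - 3x^2
X, Y = np.meshgrid(np.linspace(-2.5, 2.5, 30), np.linspace(-4, 4, 30))
plt.streamplot(X, Y, Y, 3 - 3*X**2, density=1.2)
plt.plot([1, -1], [0, 0], 'ko')  # center at (1,0), saddle at (-1,0)
plt.xlabel('$x$')
plt.ylabel("$x'$")
plt.show()
```
|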
Show that if there exist two complex numbers $a,b$ such that $f(a)=a$ and $f(b)=b$ then $f(z)=z$ for all $z\in B(0,1)$. | Given $c$ with $|c| < 1$, you can easily prove that the map
$$ h_c(z) = \frac{z - c}{1 - \overline{c}z} $$
is a holomorphic automorphism of the open unit disc sending $c$ to $0$. The inverse of $h_c(z)$ is $h_{-c}$.
If $a$ or $b$ were $0$, the answer would follow from an immediate application of the Schwarz lemma. The hint offers you to reduce the problem to that case by using a holomorphic automorphism of $B(0,1)$ that sends $a$ to $0$. More specifically, you are offered to consider $g = h_a \circ f \circ h_{-a} = h_a \circ f \circ (h_a)^{-1}$ - the conjugation of $f$ by $h_a$. Since $f$ maps the open unit disc into itself, it follows that $|g(z)| < 1$ for all $z \in B(0,1)$. We also have
$$ g(0) = h_a(f(h_{-a}(0))) = h_a(f(a)) = h_a(a) = 0 $$
and
$$ g(h_a(b)) = h_a(f(h_a^{-1}(h_a(b)))) = h_a(f(b)) = h_a(b). $$
Since $h_a(b) \neq 0$ (because $h_a$ is an automorphism and $h_a(a) = 0$), the Schwarz lemma implies that $g(z) = z$ for all $z \in B(0,1)$. The only map conjugate to the identity is the identity itself, and so $f(z) = z$ for all $z \in B(0,1)$. |
Optimizing a winning strategy for a quick tabletop game | Here is a proof that $5$ is the minimum:
First, every turn consists of two 'moves': a 'choice-move', which is where you choose which two cups to remove, and a 'flip-move', where you flip any of the coins you get to see.
Second, note that there are only two possible 'choice-moves': you either choose two adjacent cups or two opposite cups ... and it is clear from the setup that it doesn't matter which of the four adjacent pairs or which of the two opposite pairs.
Third, there are only three kinds of coin-configurations where not all four coins face the same way:
A. three coins face the same way
B. two adjacent coins face the same way
C. two opposite coins face the same way
You yourself can be in one of several states as well which, as we will see, are listed here from 'best' to 'worst':
1. You know you are in situation C
2. You know you are in situation B
3. You know you are in situation A
4. You know two coins face the same way, but you don't know whether it's B or C
5. You don't know whether you are in situation A, B, or C
Let's consider these, in order from 'best' to 'worst':
if you know that you are in situation C, then the optimal strategy is of course to choose two opposite coins and flip them, and thus finish the game (and you obviously can't do better than that)
If you know you are in situation B, then the optimal strategy is to choose two adjacent coins: if they are the same, flip them both, and that will finish the game, and if they are not the same, then flip them both to get into situation C, and finish the game after that. It is clear that this is the best strategy for state 2, since choosing two opposite coins (which will have opposite sides) and then flipping no or two coins will make you stay in state 2, and flipping 1 coin makes you go to state 3 which, as we will see, is even worse.
If you know you are in situation A, then the optimal strategy is to choose two opposite coins. If you happen to know which side it is that is the odd one out, then if you see that one, flip it and you win (clearly can't do better). If you see two of the same faces, then flip one of the coins, which will get you into state 2 (can't do better, since flipping none or both will keep you in state 3), and thus two moves from finishing the game. Choosing two adjacent coins is not a good strategy when in state 3, since you might get to see two coins that are facing the same way, and thus flipping none or both keeps you in state 3, and since you do not know where the odd one out is located, flipping one coin gets you to state 4 which, as we will see, has a worst-case scenario of being 3 moves from finishing the game. In other words, the best strategy when in state 3 is to choose opposite coins, which with optimal game-play has a worst-case of finishing the game in 3 moves.
If you know that the coins are in configuration B or C, but you don't know which one, then the optimal strategy is to choose two opposite cups, since choosing two adjacent cups might reveal two coins that are not facing the same way, which would be compatible with both B and C, and so that does not tell you anything. Now, if the two opposite coins show the same face, then you are clearly in situation C, so flip both to finish the game. If the opposite coins show different faces though, you know you are in situation B, but flipping any coins does not improve your state, meaning that you can move to state 2, and finish the game in at most 2 moves. Hence, in the worst-case scenario, it will take 3 moves to finish the game when in state 4.
Finally, at the start of the game you are in state 5. You can now finish the game in at most 5 moves by doing the following: select two adjacent cups, make sure both coins are heads, and then on the next turn choose two opposite cups, and again make sure both coins are heads. This will ensure that you have flipped exactly three coins to heads, and thus that you are in state 3 if you didn't finish the game already. And, as we saw, it takes at worst 3 more moves to finish the game.
Notice that you could of course have made the three coins tails as well, and you can also first choose two opposite coins and then choose adjacent coins.... although for an optimal strategy that minimizes the expected number of turns, you should first pick two adjacent cups, and then pick two opposite cups, for if the opposite cups show the same face, then you can flip one of them to ensure you get to configuration B.
OK, but why can't you do better, starting at the very start of the game? Well, after one turn you have not seen two of the coins, and hence you cannot know whether you are in situation A, B, or C.
Moreover, it is impossible to get to either state 1 or 2 in just 2 moves from the start:
First of all, if you make the same 'choice-move' for the first two moves, then it is possible to see the exact same two coins twice, meaning that you do not know anything about the other two coins, except that, given that you made the two coins you saw face the same way, the other two coins are not both also facing that way. Hence, you do not know whether one or both of the other two coins are facing the other way, and hence you can't get to state 1 or 2 that way.
However, if you make two different 'choice-moves' for the first two moves, then you are guaranteed to see exactly three different coins, meaning that you do not know the orientation of the fourth coin (and hence again you cannot get to state 1 or 2), unless you made all the 3 coins face the same way ... which is exactly what we claimed was the optimal strategy, and which leads to a worst-case of 5 moves. |
Computing the nth-derivative $\frac{d^{n}}{d\lambda^{n}}e^{\lambda x-\frac{\lambda^{2}}{2}t}$ | By using
\begin{align}
D^{n}[ f(x) \, g(x) ] = \sum_{k=0}^{n} \binom{n}{k} \, f^{(k)}(x) \, g^{(n-k)}(x)
\end{align}
then it is seen that
\begin{align}
D^{n} [ e^{a x} \, e^{- b x^{2}/2} ] &= e^{ax} \, \sum_{k=0}^{n} \binom{n}{k} \, D^{k}(e^{-bx^{2}/2}) \, a^{n-k} \\
&= e^{ax- bx^{2}/2} \, \sum_{k=0}^{n} (-1)^{k} \binom{n}{k} \, \left(\frac{b}{2}\right)^{k/2} \, a^{n-k} \, H_{k}\left( \sqrt{\frac{b}{2}} x \right), \\
\end{align}
using $D^{k}\big(e^{-bx^{2}/2}\big) = (-1)^{k}\left(\frac{b}{2}\right)^{k/2} H_{k}\big(\sqrt{b/2}\,x\big)\, e^{-bx^{2}/2}$.
Now using the formula
\begin{align}
H_{n}(x+y) = \sum_{k=0}^{n} \binom{n}{k} \, H_{k}(x) \, (2y)^{n-k}
\end{align}
then it is seen that
\begin{align}
H_{n}\left( \sqrt{\frac{b}{2}} \, x - \frac{a}{\sqrt{2b}} \right) = \sum_{k=0}^{n} \binom{n}{k} \, H_{k}\left( \sqrt{\frac{b}{2}} x \right) \left(- \frac{2a}{\sqrt{2b}}\right)^{n-k} = (-1)^{n} \left(\frac{2}{b}\right)^{n/2} \sum_{k=0}^{n} (-1)^{k} \binom{n}{k} \left(\frac{b}{2}\right)^{k/2} a^{n-k} \, H_{k}\left( \sqrt{\frac{b}{2}} x \right).
\end{align}
Comparing with the sum above, it follows that
\begin{align}
D^{n} [ e^{a x} \, e^{- b x^{2}/2} ] &= (-1)^{n} \left(\frac{b}{2}\right)^{n/2}
e^{ax- bx^{2}/2} \, H_{n}\left( \sqrt{\frac{b}{2}} \, x - \frac{a}{\sqrt{2b}} \right).
\end{align}
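A quick symbolic check of the final identity for a small $n$ (a SymPy sketch; SymPy's `hermite` is the physicists' Hermite polynomial $H_n$):

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
n = 3
f = sp.exp(a*x - b*x**2/2)
lhs = sp.diff(f, x, n)
rhs = (-1)**n * (b/2)**sp.Rational(n, 2) * f \
      * sp.hermite(n, sp.sqrt(b/2)*x - a/sp.sqrt(2*b))
print(sp.simplify(lhs - rhs))  # 0
```
|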
Where am I going wrong in finding $\tan10^{\circ}$ from $\tan30^{\circ}$ using the $\tan3A$ formula? | $\tan 3x=\tan30^{\circ}\implies 3x=30^{\circ}+n\cdot180^{\circ}\implies x=10^{\circ}+n\cdot60^{\circ}$ |
Combinations of n items that fill the entire volume | For $n=k$, it is the coefficient of $x^k$ in:
$$\prod_{i=1}^k \frac1{1-x^i}$$
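A standard dynamic-programming sketch extracting this coefficient (the test value $k=5$ is just an example):

```python
def partitions(k):
    # dp[j] = coefficient of x^j in prod_{i=1..k} 1/(1 - x^i)
    dp = [1] + [0]*k
    for i in range(1, k + 1):
        for j in range(i, k + 1):
            dp[j] += dp[j - i]
    return dp[k]

print(partitions(5))  # 7, the number of partitions of 5
```
|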
What is the theoretical expected value of the sum of the highest 3 values of 4 fair dice? | The mean of the sum of the $4$ is $(4)(7/2)=14$. Let $\mu$ be the mean value of the smallest number. Then the mean of the three largest is $14-\mu$.
So we only need to find $\mu$. Let random variable $X$ be the smallest number obtained. We will know $\mu=E(X)$ once we know the distribution of $X$.
So let us find $\Pr(X=1)$, $\Pr(X=2)$, and so on up to $\Pr(X=6)$.
It is simplest to calculate the probabilities backwards. So we find first the probability that $X=6$. This is the probability all throws are $6$, which is $(1/6)^4$.
Next we calculate $\Pr(X=5)$. This is the probability that all throws are $\ge 5$, minus the probability they are all $\ge 6$. So $\Pr(X=5)=(2/6)^4-(1/6)^4$.
Next we calculate $\Pr(X=4)$. This is the probability all throws are $\ge 4$, minus the probability they are all $\ge 5$. So $\Pr(X=4)=(3/6)^4-(2/6)^4$.
Continue. At the end we find $\Pr(X=1)=(6/6)^4-(5/6)^4$.
Now use the usual formula for expectation once we know the distribution.
Remark: There is another way of doing it that introduces a useful trick.
Note that
$$E(X)=1\cdot\Pr(X=1)+2\cdot\Pr(X=2)+3\cdot\Pr(X=3)+\cdots+6\cdot\Pr(X=6).$$
Rearrange this sum. We get
$$\left[\Pr(X=1)+\Pr(X=2)+\cdots+\Pr(X=6)\right]+\left[\Pr(X=2)+\cdots+\Pr(X=6)\right]+\left[\Pr(X=3)+\cdots +\Pr(X=6)\right]+\cdots+\left[\Pr(X=6)\right] $$
The first sum in square brackets is $1$. The second is the probability that the minimum is $\ge 2$, which is $(5/6)^4$. (All tosses must be $\ge 2$). Similarly, the third term in square brackets is $(4/6)^4$, and so on up to $(1/6)^4$. It follows that
$$\mu=1+(5/6)^4+(4/6)^4+\cdots +(1/6)^4.$$
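As a concrete check, an exact enumeration over all $6^4$ outcomes (Python):

```python
from itertools import product
from fractions import Fraction

total = Fraction(0)
for roll in product(range(1, 7), repeat=4):
    total += sum(roll) - min(roll)  # sum of the highest 3 dice

print(total / 6**4)  # 15869/1296, about 12.2446
```
|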
Showing a finite sum involving Gamma functions adds to zero | I found a way to evaluate this sum using residues. In the following, let $b!$ denote $\Gamma(b+1)$ for non-integer $b$.
Rewrite the original sum as
$$\sum_{n=0}^j \frac{2n-j-\nu}{n!(j-n)!(n-\nu)!(j-n+\nu)!}
=\frac{1}{j!^2}\sum_{n=0}^j (2n-j-\nu)\binom{j}{n}\binom{j}{n-\nu}$$
Now we use
$$\binom{a}{b}=\text{Res}_{x=0}\frac{(1+x)^a}{x^{b+1}},\ a\in\mathbb{Z}_{\geq 0}$$
Our sum then equals
$$\text{Res}_{x=0}\frac{1}{j!^2}\sum_{n=0}^j (2n-j-\nu)\binom{j}{n}\frac{(1+x)^j}{x^{n-\nu+1}}=\text{Res}_{x=0}\frac{(1+x)^j}{j!^2 x^{1-\nu}}\sum_{n=0}^j (2n-(j+\nu))\binom{j}{n}\frac{1}{x^n}$$
Then we use
$$\sum_{n=0}^j\binom{j}{n}x^n=(1+x)^j,\ \sum_{n=0}^j n\binom{j}{n}x^n=jx(1+x)^{j-1}$$
to see our sum is
$$\text{Res}_{x=0}\frac{(1+x)^j}{j!^2 x^{1-\nu}}\left(2j\frac{1}{x}(1+1/x)^{j-1} -(j+\nu)(1+1/x)^j\right)
=\text{Res}_{x=0}\frac{(1+x)^{2j-1}}{j!^2 x^{j+1-\nu}}(2j-(j+\nu)(1+x))
=\text{Res}_{x=0}\frac{(1+x)^{2j-1}}{j!^2 x^{j+1-\nu}}(j-\nu-(j+\nu)x)$$
$$=\frac{1}{j!^2}\left((j-\nu)\binom{2j-1}{j-\nu}-(j+\nu)\binom{2j-1}{j-\nu-1}\right)=\frac{1}{j!^2}\left((j-\nu)\binom{2j-1}{j-\nu}-(j-\nu)\binom{2j-1}{j-\nu}\right)=0$$
In evaluating the residue we assume $j>0$.
I am still interested in seeing if there are simpler or alternate techniques for showing this equality. |
Show that $X$ is not an affine variety | Here is an alternative to the above answers with a more algebra-y flavor. (For those interested, this is exercise 1.2.8 of Cox, Little, and O'Shea.) Given $f(x,y) \in \mathbf{I}(X)$, let $g(t) = f(t,t)$. Then $g \in \mathbb{R}[t]$ vanishes on $\mathbb{R} \setminus \{1\}$. But a nonzero polynomial of one variable over a field has only finitely many roots, so we must have $g=0$. Then $f(1,1) = g(1) = 0$, so $f$ vanishes at $(1,1)$. |
What was the number of blue marbles in the bag before any changes were made? | Let $x=$ number of blue marbles originally, and $y=$ number of green marbles originally. The correct equations should be: $$\frac25=\frac{y-3}{x+y-3}$$ $$\frac58=\frac{x+7}{x+y+7}$$
The answer I got is $x=18,y=15$ |
proposition about a nowhere dense set | HINT: Suppose first that for each non-empty open set $U\subseteq X$ there is an open ball $B(x,\epsilon)\subseteq U$ such that $A\cap\operatorname{cl}B(x,\epsilon)=\varnothing$; you want to show that $\operatorname{int}\operatorname{cl}A=\varnothing$. Let $U=\operatorname{int}\operatorname{cl}A$; certainly $U$ is open. If $U\ne\varnothing$, there is an open ball $B(x,\epsilon)\subseteq U$ such that $A\cap\operatorname{cl}B(x,\epsilon)=\varnothing$. Clearly $A\cap B(x,\epsilon)=\varnothing$; combine this with the fact that $x\in U$ to get a contradiction.
Now suppose that $A$ is nowhere dense in $X$, so that $\operatorname{int}\operatorname{cl}A=\varnothing$, and let $U$ be a non-empty open set in $X$; you want to show that there are an $x\in U$ and an $\epsilon>0$ such that $B(x,\epsilon)\subseteq U$ and $A\cap\operatorname{cl}B(x,\epsilon)=\varnothing$. $U\nsubseteq\operatorname{cl}A$ (why?), so pick any $x\in U\setminus\operatorname{cl}A$. There is a $\delta>0$ such that $B(x,\delta)\cap A=\varnothing$ (why?). Now how can you choose $\epsilon>0$ so that $A\cap\operatorname{cl}B(x,\epsilon)=\varnothing$ and $B(x,\epsilon)\subseteq U$? |
$S^{-1}M$ is a Noetherian $S^{-1}R$ module | First of all, you can define an $R$-module structure on $S^{-1}M$ by restriction of scalars. In other words, the scalar multiplication is $r.\frac{m}{s}=\frac{r}{1}\cdot\frac{m}{s}$. It is easy to check that any $S^{-1}R$-submodule of $S^{-1}M$ is also an $R$-submodule.
Now, we can define a map $f:M\to S^{-1}M$ by $f(m)=\frac{m}{1}$. Again, it is straightforward that this is a homomorphism of $R$-modules. Thus, if $K\subseteq S^{-1}M$ is a submodule over $S^{-1}R$ (and thus also a submodule over $R$) then $f^{-1}(K)$ is a submodule of $M$ as an inverse image of a submodule of $S^{-1}M$.
Now let $K\subseteq S^{-1}M$ be an $S^{-1}R$-submodule, and let $N=f^{-1}(K)$. We will show that indeed $K=S^{-1}N$. If $\frac{n}{s}\in S^{-1}N$ where $n\in N, s\in S$ then by definition $f(n)=\frac{n}{1}\in K$, and since this is an $S^{-1}R$-module we have $\frac{n}{s}=\frac{1}{s}\cdot\frac{n}{1}\in K$ as well. Conversely, suppose $\frac{n}{s}\in K$. Then $\frac{n}{1}=\frac{s}{1}\cdot\frac{n}{s}\in K$ as well, and so $n\in f^{-1}(K)=N$, which means $\frac{n}{s}\in S^{-1}N$. So by two sided inclusion indeed $S^{-1}N=K$.
Alright, now your exercise is really easy. Suppose $K_1\subseteq K_2\subseteq K_3\subseteq...$ is an increasing sequence of $S^{-1}R$-submodules of $S^{-1}M$. For each $n\in\mathbb{N}$ let $N_n=f^{-1}(K_n)$. Then $N_1\subseteq N_2\subseteq...$ is an increasing sequence of $R$-submodules of $M$. Since $M$ is a Noetherian $R$-module there is some $n\in\mathbb{N}$ such that $N_i=N_n$ for all $i\geq n$. As we have shown, this implies $K_i=S^{-1}N_i=S^{-1}N_n=K_n$ for all $i\geq n$. |
Find the solution for $x$ with $0 \le x \lt 13$ so $13 \mid 3x^2+9x+7 $. | Let's work in $\mod 13$. We want $3x^2+9x+7 \equiv 0 \pmod {13}$. Now $7 \equiv -6$ so you have $3x^2+9x-6 \equiv 0 \pmod {13}$. Because $\gcd(3, 13) = 1$, we can "divide both sides" by $3$ and get $x^2+3x-2 \equiv 0 \pmod {13}$
Notice that $-2 \equiv -28$, so $x^2+3x-28 \equiv 0 \pmod {13}$ which you can factor to $(x-4)(x+7)\equiv 0 \pmod {13}$. Because $13$ is prime, we have $x \equiv 4$ or $x \equiv -7 \equiv 6$.
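A brute-force confirmation (Python):

```python
print([x for x in range(13) if (3*x*x + 9*x + 7) % 13 == 0])  # [4, 6]
```
|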
Show that if $G$ is a noncomplete graph of order $n$, then $t(G)≤\frac {n-α(G)}{α(G)}$ | By removing all vertices except an independent set of size $\alpha(G)$, it is possible to split $G$ into $\alpha(G)$ components.
Since $G$ is $t(G)$-tough, we must have removed at least $t(G)\alpha(G)$ vertices.
We conclude $$t(G)\alpha(G)\le n-\alpha(G)$$ |
Solution to pde $f_x =a f +b+ c$ | Since $b$ and $c$ are both constants, let $d := b+c$. The differential equation is
$$\frac{\partial f}{\partial x} = af + d$$
Since there are no derivatives with respect to $y$, we can solve this like an ODE - we only have to be careful when considering the constants of integration.
By separation of variables, we have
$$\int\frac{df}{af+d} = \int dx$$
Calculating the integrals, we have
$$\frac{1}{a}\log|af+d| = x + g(y)$$
What would be a constant of integration on the RHS in an ODE is an arbitrary function of $y$, as this will vanish when taking the partial derivative with respect to $x$.
\begin{align*}
&\log|af+d| = ax+ag(y)\\
&\Rightarrow|af+d| = e^{ag(y)}e^{ax} \\
&\Rightarrow af = \pm e^{ag(y)}e^{ax} -d \\
&\Rightarrow f(x,y) = \frac{G(y)e^{ax} - d}{a}
\end{align*}
where $G(y) := \pm e^{ag(y)}$ is another arbitrary function of $y$.
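A quick SymPy check that this family satisfies the original equation (with $G$ an arbitrary function of $y$):

```python
import sympy as sp

x, y, a, d = sp.symbols('x y a d')
G = sp.Function('G')
f = (G(y)*sp.exp(a*x) - d) / a
print(sp.simplify(sp.diff(f, x) - (a*f + d)))  # 0
```
|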
Is quasi-isomorphism an equivalence relation? | $\def\ZZ{\mathbb Z}$The relation $E \sim F$ defined by «there exists a morphism $E\to F$ inducing an isomorphism in homology» is not an equivalence relation because it is not symmetric (it is reflexive and transitive).
For example, there is a morphism from
$$\cdots 0\to \ZZ\xrightarrow2\ZZ\to0\to\cdots$$
to the complex
$$\cdots 0\to 0\to\ZZ/2\ZZ\to0\to\cdots$$
inducing an isomorphism in homology, but there is no non-zero morphism in the other direction.
The useful relation is the symmetric closure of this relation. |
Marginal Cost and Avg cost optimization problem | All are correct, but (ii) may seem more direct if you divide the fraction to get
$$
A(x) = 16000 x^{-1} + 200 + 4x^{1/2}
$$
so
$$
A'(x) = 2x^{-1/2} - 16000x^{-2}
$$
thus $A'(x) = 0$ yields $8000 = x^{3/2}$, so $x = 400$.
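The same computation in SymPy, as a quick check:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A = 16000/x + 200 + 4*sp.sqrt(x)
print(sp.solve(sp.diff(A, x), x))  # [400]
```
|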
Finding the family of functions given an equation | Hint:
$$\left(x^3+\dfrac1{x^3}\right)\left(x^2+\dfrac1{x^2}\right)=x^5+\dfrac1{x^5}+x+\dfrac1x$$
Now $x^3+\dfrac1{x^3}=\left(x+\dfrac1{x}\right)^3-3\left(x+\dfrac1{x}\right)$ |
Modified simpson's rule for varying step sizes | So here's the uninspired algebra: Setting $h=x_2-x_1$ and $k=x_3-x_2$, we want
\begin{align*}
y_1 &= ah^2-bh+c \\
y_2 &= c \\
y_3 &= ak^2+bk+c.
\end{align*}
Solving, you get
\begin{align*}
a&= \frac{ky_1-(h+k)y_2+hy_3}{hk(h+k)} \\
b&= \frac{-k^2y_1+(k^2-h^2)y_2+h^2y_3}{hk(h+k)} \\
c&= y_2.
\end{align*}
Ugh.
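A SymPy check that these coefficients interpolate the three points (a minimal sketch):

```python
import sympy as sp

h, k, y1, y2, y3, x = sp.symbols('h k y1 y2 y3 x', positive=True)
a = (k*y1 - (h + k)*y2 + h*y3) / (h*k*(h + k))
b = (-k**2*y1 + (k**2 - h**2)*y2 + h**2*y3) / (h*k*(h + k))
p = a*x**2 + b*x + y2
# the parabola should pass through (-h, y1), (0, y2), (k, y3)
print([sp.simplify(p.subs(x, t) - v) for t, v in [(-h, y1), (0, y2), (k, y3)]])  # [0, 0, 0]
```
|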
Showing a subspace is not simply connected. | Hint (perhaps too big):
$\,\emptyset\neq K\subset\Bbb R^2\,$ is closed and bounded, and thus there's some circle
$$C:=\{(x,y)\in\Bbb R^2\;\;;\;\;x^2+y^2=R^2\}\,\,\,s.t.\,\,\,K\subset\,\,\, \stackrel{\circ}C$$
But then the path $\,C\subset\Bbb R^2\setminus K\,$ cannot be nullhomotopic in $\,\Bbb R^2\setminus K\,$ ... |
Apparent vicious cycle for compactness of a sphere in FDVS | One usually uses the Heine-Borel theorem to show that the closed unit ball of $\mathbb R^n$ is compact. The argument then goes as follows:
$$
{S^{n-1}\subset \mathbb R^n} \text{ is compact}\\
\Downarrow\\
\text{ all norms on }\mathbb R^n\text{ are equivalent}\\
\Downarrow\\
V \text{ is isomorphic to } \mathbb R^n
$$
This provides an entry into the circle. |
Linear Algebra- Matrix derivative | See, for instance, Wikipedia, here and here. |
Smallest possible dimensions of a piece of paper after one fold. | We fold the $1\times 1$ paper once and cover it with a rectangle of sides $a,b$.
We want $x=\min\{a,b\}$ such that the smaller side is minimized.
Showing $x\le 1/2$ is easy. Fold the paper in half and align the opposite edges; we get exactly a $1/2$ by $1$ rectangle. The smallest rectangle that covers a rectangle is that rectangle itself.
Now we need to also prove $x\ge1/2$ and we are done. Consider that the paper is cut along the folding line. We get two pieces. It is clear that the rectangle covering the folded paper must at least be able to cover each of these pieces individually.
If the cut goes over two neighbouring sides, it is clear that we need at least a $1$ by $1$ covering rectangle which is not minimal as we already know $1\not \le 1/2$. In the limiting case where cuts are along the opposite corners, we have $x\ge \frac{\sqrt2}{2}\approx 0.7071\not \le 0.5$ by rotating to align the height of the triangular piece to the bounding rectangle, which still isn't the true lowest lower bound. Hence, assume otherwise: the cut goes over two opposing sides of the paper.
Both cut pieces $i=1,2$ will be right angle trapezoids with sides $1,a_i,b_i$ where $a_i\parallel b_i$, and $a_i,b_i$ are perpendicular to side $1$, and such that $a_1+a_2=b_1+b_2=1$. (The area of the larger of the two trapezoids is then at least $A\ge 1/2$.)
WLOG $a_1\ge b_1$. This implies that $b_2\ge a_2=1-a_1$.
We need to find the minimal covering rectangles for these two right angled trapezoids. WLOG consider that the covering rectangle is parallel to the coordinate axes, and that the trapezoid is rotated by some $\phi_i\in[0,\pi/2]$. We want to find $\phi_i$ such that the rectangles $r_i$ of sides $p_i,q_i$ covering the right angle trapezoids $i\in\{1,2\}$ have minimal smaller sides $\min_{\phi_i}\{p_i,q_i\}$.
If the rotation is $0$ or $\pi/2$ then sides $1,a_i,b_i$ are parallel to the sides of the covering rectangle $r_i$. Therefore, the larger side of the covering rectangle is $p_i=1$ and the smaller is $q_i=\max\{a_i,b_i\}$. We want to be able to cover at least the larger of the two trapezoid pieces:
$$x\ge\min\{\max\{a_1,b_2\},1\}=\min\{a_1,1-a_1\}=1/2$$
This is a true lower bound if for $\phi_i\in(0,\pi/2)$, we can't do lower than $1/2$.
Assume we can. WLOG $q_i\ge 1$ is larger and $p_i\le 1/2$ is a better (or equal) smaller side. We can imagine extending $q_i$ infinitely since we only care about minimizing $p_i$, the smaller dimension. Now it is clear that $p_i$ is minimized by minimizing the height of the highest vertex of the trapezoid, which is precisely when the fourth side of the trapezoid is parallel to $q_i$.
The larger of the minimal dimensions $p^*_i$ (of the two $i=1,2$) needed to cover both is the one for some $i$ such that $a^*_i=\max\{a_i,b_i\}$. The $p^*_i$ is the height from the point where sides $a^*_i$ and $1$ meet, to the fourth side. The side $1$ needs to be extended beyond where $b^*_i$ and $1$ meet to form a right angled triangle. The extension is by $\frac{a^*_i}{\sqrt{{c^{*}_i}^2-1}}$ (see the similar triangles). Similarly, the extension of the fourth side in the same direction is $\frac{a^*_i}{\sqrt{1-{c^{*}_i}^{-2}}}$. The $c^*_i$ value is the length of the fourth side.
$$c^*_i=\sqrt{(a^*_i-b^*_i)^2+1}$$
The area of the right angled triangle obtained by the extension is:
$$
\frac12 p^*_i \left(c^{*}_i+\frac{a^*_i}{\sqrt{1-{c^{*}_i}^{-2}}}\right)=\frac12 a^*_i \left(1+\frac{a^*_i}{\sqrt{{c^{*}_i}^2-1}}\right)
$$
Where we have now $p^*_i=f(a^*_i,b^*_i)$ which we want to minimize to get $a$. Notice we have $a^*_i\ge 1-a^*_i,a^*_i\ge 1-b^*_i$ and $a^*_i \ge b^*_i\ge 0$ because these are the maximal dimensions among the two pieces.
The global minimum is attained precisely when $a^*_i=b^*_i=1/2$, giving a $1/2$ by $1$ rectangle and hence the lower bound $x\ge 1/2$. This is precisely the initial $1/2$ by $1$ covering rectangle that gave our upper bound at the beginning. (You can use Wolfram Alpha to double-check.)
That is, lower bound = upper bound = $1/2 = x$, regardless of rotation.
This finishes the proof. |
Is it true that $log(i) = \frac\pi2i$ ? If so, are both of these legitimate proofs? They seem too beautiful not to be... | It depends on how you define "logarithm" in the first place. But the simplest and most straightforward approach would be to define
Definition. $\log x$ means the number $y$ such that $e^y=x$.
With this definition the first two lines of your first proof are a complete, direct proof that $\log i = \frac12\pi i$.
The only problem is that with this reasoning $\log i$ can also be shown to be $\frac52\pi i$ and $\frac 92\pi i$ and $\frac{13}2\pi i\ldots$ because the exponential of each of those numbers is also $i$.
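Concretely, Euler's formula gives $$e^{i\left(\frac\pi2+2k\pi\right)}=\cos\left(\tfrac\pi2+2k\pi\right)+i\sin\left(\tfrac\pi2+2k\pi\right)=i\qquad\text{for every integer }k,$$ which is exactly where these extra values come from.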
This is not just a minor deficiency of the definition, though -- the complex logarithm is inherently multivalued in the precise sense that it is not possible to define a single function that is continuous on all of $\mathbb C\setminus\{0\}$ such that $e^{f(z)}=z$ everywhere.
The usual fudging one does in order to overcome this is to speak of the "principal logarithm", notated $\operatorname{Log}$ with a capital L, and define
$\operatorname{Log} z$ means the number $y$ such that $e^y=z$ and the imaginary part of $y$ is in $(-\pi,\pi]$.
This is, unfortunately, discontinuous on the negative real axis, but we can't really do better.
Your second proof is an ingenious manipulation, but of a kind that will easily lead to nonsense if you're not careful. Basically you're taking the power series for $\log(1+z)$ which will only work for $|z|\le 1, z\ne -1$ and then just assuming that this extends to a function on all of $\mathbb C\setminus 0$ that satisfies the usual logarithm rules everywhere. But this is not the case -- with the same reasoning we could say
$$ 1 = \Bigl(\frac{1+i}{\sqrt2}\Bigr)^8 $$
so
$$ 0 = \log 1 = 8\log\Bigl(\frac{1+i}{\sqrt2}\Bigr) $$
which requires the logarithm on the right to be $0$ -- but actually the series will yield $\frac14\pi i$.
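Indeed, since $\frac{1+i}{\sqrt2}=\cos\frac\pi4+i\sin\frac\pi4 = e^{i\pi/4}$, the series gives $$8\log\Bigl(\frac{1+i}{\sqrt2}\Bigr)=8\cdot\frac{\pi i}{4}=2\pi i\ne 0.$$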
This is the multi-valuedness of the logarithm that strikes again -- it is not possible for any nontrivial differentiable function on $\mathbb C\setminus\{0\}$ to satisfy $f(ab)=f(a)+f(b)$ everywhere. |
Index of a Jordan curve | Look, for example, at pp. 85-91 of Guillemin and Pollack for a version of this valid for smooth hypersurfaces in $\Bbb R^n$. |
Prove $f(x) =x^2$ is continuous at $x_0=2$ using the $\epsilon$-$\delta$ definition | Note that $|x+2|<5$ for $|x-2| < 1$. So let's take $|x-2| < 1$. Then you have
$$|x-2||x+2| < |x-2|5$$
If $|x-2| < \tfrac \epsilon5$ you get $|x-2|5 < \epsilon$. We want
$|x-2| < 1$
$|x-2| < \tfrac\epsilon5$
Thus we take $\delta = \min\{1,\tfrac\epsilon5\}$, because if $|x-2|<\min\{1,\tfrac\epsilon5\}$ then both of the above conditions are fulfilled.
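For a concrete instance, take $\epsilon=\tfrac12$: then $\delta=\min\{1,\tfrac1{10}\}=\tfrac1{10}$, and for $|x-2|<\tfrac1{10}$ we indeed get $$|x^2-4|=|x-2|\,|x+2|<\tfrac1{10}\cdot 5=\tfrac12=\epsilon.$$ |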
Rewriting the expression $\sum_{p}\sum_{k=1}^{\infty}\frac{1}{k}\frac{1}{p^{ks}}=\sum_{n=2}^{\infty}\frac{\Lambda (n)}{\log n}\frac{1}{n^s}$ | Note that $\Lambda(n) = 0$ unless $n$ is equal to a prime power, so that $n = p^k$ for some prime $p$ and some integer $k \geq 1$. This means that we can delete all of the terms in the sum for which $n$ is not of this form (because these terms are $0$), so that
\[\sum_{n = 2}^{\infty} \frac{\Lambda(n)}{n^s \log n} = \sum_{\substack{n = 2 \\ n = p^k \text{ for some prime $p$ and $k \geq 1$}}}^{\infty} \frac{\Lambda(n)}{n^s \log n}.\]
If $n = p^k$, then $\Lambda(n) = \log p$, so that this is
\[\sum_{\substack{n = 2 \\ n = p^k \text{ for some prime $p$ and $k \geq 1$}}}^{\infty} \frac{\log p}{(p^k)^s \log p^k} = \sum_{\substack{n = 2 \\ n = p^k \text{ for some prime $p$ and $k \geq 1$}}}^{\infty} \frac{1}{k p^{ks}}\]
since $(p^k)^s = p^{ks}$ and $\log p^k = k \log p$.
Finally, the sum over all integers $n \geq 2$ of the form $p^k$ for some prime $p$ and $k \geq 1$ is the same as the sum over all primes $p$ and all integers $k \geq 1$ (that is, there is a bijection between these two sets), so this is equal to
\[\sum_p \sum_{k = 1}^{\infty} \frac{1}{k p^{ks}}.\]
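As a concrete check, the $n = 8 = 2^3$ term of the left-hand sum is \[\frac{\Lambda(8)}{8^s \log 8} = \frac{\log 2}{2^{3s}\cdot 3\log 2} = \frac{1}{3 \cdot 2^{3s}},\] which is exactly the $p = 2$, $k = 3$ term of the double sum. |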
How to calculate $π$ from first principles | I think that what you're after is what Archimedes did in his text Measurement of a Circle. There, he proved, using geometric principles and no series, that the area of any circle is equal to that of a right-angled triangle in which one of the sides about the right angle is equal to the radius, and the other to the circumference, of the circle. If we define $\pi$ as the number such that the perimeter of a circle is $\pi$ times its diameter, then what this means is that the area of a circle with radius $r$ is $\pi r^2$.
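In modern notation: the triangle has legs $r$ and $C = 2\pi r$, so its area is $\frac12 r C = \frac12 r(2\pi r) = \pi r^2$. |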
Is $x^2 \sin (x)$ periodic function? | For a function to be periodic there must exist a nonzero constant $\tau$ such that $f(x+\tau)=f(x)$ for all $x$. The smallest such positive $\tau$ is the fundamental period (or just "the period", as opposed to "a period").
From what you say you should conclude that your function is not periodic (as you observe, the amplitude is changing); it is oscillatory, but not periodic.
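To make this concrete: the zeros of $x^2\sin x$ are exactly the integer multiples of $\pi$, so a period $\tau$ would have to be a multiple of $\pi$; but then $|f(\tfrac\pi2+\tau)|=(\tfrac\pi2+\tau)^2\neq(\tfrac\pi2)^2=|f(\tfrac\pi2)|$, so no period exists. |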
What is a primary decomposition of the ideal $I = \langle xy, x - yz \rangle$? | You can only use geometry to find the primary decomposition if the given ideal is actually an intersection of prime ideals. In your case, it is true that
$$
V(xy,x-yz) \;=\; V(x,y) \cup V(x,z)
$$
but
$$
(xy,x-yz) \;\subsetneq\; (x,y) \cap (x,z).
$$
For example, $x\in (x,y)\cap (x,z)$, but $x\notin (xy,x-yz)$.
Instead, the given ideal is the intersection of two primary ideals
$$
(xy,x-yz) \;=\; Q_1 \cap Q_2
$$
where the radicals of $Q_1$ and $Q_2$ are the prime ideals that you have found:
$$
\sqrt{Q_1}=(x,y) \qquad\text{and}\qquad \sqrt{Q_2} = (x,z).
$$
To find $Q_1$ and $Q_2$, observe that
$$
(xy,x-yz) \;=\; (y^2z,x-yz),
$$
since $xy - y^2 z = y(x-yz)$. We can now factor the $y^2z$:
$$
(y^2z,x-yz) \;=\; (y^2,x-yz) \cap (z,x-yz)
$$
To prove this equation, observe that all three ideals contain $(x-yz)$. The quotient $k[x,y,z]/(x-yz)$ is isomorphic to $k[y,z]$, and obviously $(y^2z)=(y^2)\cap (z)$ in $k[y,z]$, so lifting back to $k[x,y,z]$ gives the desired equation.
It is easy to see that $(y^2,x-yz)$ is primary, and $(z,x-yz) = (x,z)$ is actually prime. We conclude that
$$
Q_1 = (y^2,x-yz) \qquad\text{and}\qquad Q_2 = (x,z)
$$ |
Multiple solutions of the Cat Map | Presumably you mean the number of solutions in $X = [0,1)^2$.
$S(v) = v_0$ means $Av - v_0 \in \mathbb Z^2$.
Now $A$ maps the square $X$ to the parallelogram $P$ with vertices $\pmatrix{0\cr0\cr}$, $\pmatrix{a\cr c\cr}$, $\pmatrix{a+b\cr c+d\cr}$, $\pmatrix{b\cr d\cr}$, which has area $|\det(A)|$. If you cut this parallelogram along the grid lines and translate the pieces back to $X$ (by an integer in each coordinate), each point of $X$ will be covered an average of $|\det(A)|$ times. Now you just need to show that each point will be covered the same number of times.
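For instance, the classical Arnold cat map has $A=\pmatrix{2&1\cr 1&1\cr}$ with $\det(A)=1$, so $S$ is a bijection of $X$ and every $v_0$ has exactly one solution; by contrast, $A=\pmatrix{2&0\cr 0&2\cr}$ has $\det(A)=4$, and every $v_0$ has exactly four preimages. |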
Mean distance between two points in a square polar coordinates conversion | For a vertical line we have the equation
$$ x=1 $$
But in polar coordinates $x=r \cos(\theta)$. Our equation becomes
$$ r \cos(\theta)=1 $$
$$ r = 1 / \cos(\theta)$$
Since the integration region includes the origin and $r$ is nonnegative, the smallest value of $r$ is $0$; thus $r$ ranges from $0$ to $1/\cos(\theta)$. |
Series expansion of the determinant for a matrix near the identity | Note that the determinant of a matrix is just a polynomial in the components of the matrix. This means that the series in question is finite. This also means that the functions $f_1,f_2,\dots,f_N$ are polynomials.
We start by computing the term which is first order in $\epsilon$.
Let $A_1, A_2, \dots , A_N$ be the column vectors of the matrix $A$. Let $e_1,e_2,\dots, e_N$ be the standard basis; note that these basis vectors form the columns of the identity matrix $I$. Then we recall that the determinant is an alternating multi-linear map on the column space.
$$ \mathrm{det}( I + \epsilon A ) = \mathrm{det}( e_1 + \epsilon A_1, e_2 + \epsilon A_2, \dots , e_N + \epsilon A_N ) $$
$$ = \mathrm{det}( e_1, e_2, \dots , e_N ) + \epsilon \lbrace \mathrm{det}(A_1,e_2,\dots, e_N) + \mathrm{det}(e_1,A_2,\dots, e_N) + \cdots + \mathrm{det}(e_1,e_2,\dots, A_N)\rbrace + O(\epsilon^2)$$
The first term is just the determinant of the identity matrix which is $1$. The term proportional to $\epsilon$ is a sum of expressions like $\mathrm{det}(e_1,e_2,\dots, A_j, \dots, e_N)$ where the $j$'th column of the identity matrix is replaced with the $j$'th column of $A$. Expanding the determinant along the $j$'th row we see that $\mathrm{det}(e_1,e_2,\dots, A_j, \dots, e_N) = A_{jj}$.
$$ \mathrm{det}( I + \epsilon A ) = 1 + \epsilon \sum_{j=1}^N A_{jj}+ O(\epsilon^2) = 1 + \epsilon \mathrm{Tr}(A) + O(\epsilon^2)$$
$$\boxed{ f_1(A) = \mathrm{Tr}(A) } \qquad \textbf{(1)} $$
We have the first term in our series in a computationally simple form. Our goal is to obtain higher order terms in the series in a similar form. To do this we will have to abandon the current method of attack and consider the determinant of the exponential map applied to a matrix.
If $A$ is diagonalizable then we can define $\exp(A)$ in terms of its action on the eigenvectors of $A$. If $a_1,a_2,\dots , a_N$ are eigenvectors of $A$ with eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_N$ then $\exp(A)$ is the matrix which satisfies,
$$ \exp(A) a_j = e^{\lambda_j} a_j.$$
It is not hard to show that $\det( \exp(A) ) = \exp(\mathrm{Tr}(A))$. Since $A$ is a linear operator on a finite-dimensional vector space, it has finite norm. This means that we can safely evaluate the exponential map as an infinite series. The infinite series is consistent with our definition in terms of the eigenbasis. Consider the following,
$$\det( \exp(\epsilon A) ) = \exp(\epsilon \mathrm{Tr}(A))$$
$$\det( I + \epsilon A + \frac{\epsilon^2}{2} A^2 +\cdots ) = 1 + \epsilon \mathrm{Tr}(A) + \frac{\epsilon^2}{2} (\mathrm{Tr}(A))^2 + \cdots \qquad (*)$$
On the left hand side of $(*)$ we can factor out an $\epsilon$ and write,
$$ \det( I + \epsilon \lbrace A + \frac{\epsilon}{2} A^2 +\cdots \rbrace ) = 1 + \epsilon \mathrm{Tr}(A + \frac{\epsilon}{2} A^2 + \cdots)+ \epsilon^2 f_2( A + \frac{\epsilon}{2} A^2 + \cdots ) + O(\epsilon^3)$$
$$
= 1 + \epsilon \mathrm{Tr}(A) + \epsilon^2 \lbrace \frac{1}{2}\mathrm{Tr}( A^2)+f_2( A + \frac{\epsilon}{2} A^2 + \cdots ) \rbrace + O(\epsilon^3)$$
Now we compare the second order terms in $\epsilon$ and obtain,
$$ \frac{1}{2}\mathrm{Tr}( A^2)+f_2( A + \frac{\epsilon}{2} A^2 + \cdots ) = \frac{1}{2}(\mathrm{Tr}(A))^2, $$
Now allow $\epsilon \rightarrow 0 $,
$$ \boxed{f_2( A ) = \frac{\mathrm{Tr^2}(A)-\mathrm{Tr}(A^2)}{2}} \qquad \textbf{(2)}. $$
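As a sanity check, for $N=2$ the expansion terminates at second order and $\mathrm{det}(I+\epsilon A) = 1 + \epsilon\, \mathrm{Tr}(A) + \epsilon^2 \det(A)$; indeed, $\frac{(\mathrm{Tr}(A))^2 - \mathrm{Tr}(A^2)}{2} = \det(A)$ for any $2\times 2$ matrix, as follows from the Cayley–Hamilton theorem.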
The higher order terms can be obtained systematically using the same trick as above though the computations become more involved. |
Prove this inequality holds $e^x+(\ln{x}-1)\sin{x}>0$ | Outline: We first look at $x$ in the interval $(0,e)$. Note that $e^x\gt 1+x$ and $0\lt \sin x\lt x$. So it will be enough to show that $1+x+(\ln x-1)x\gt 0$ in the interval.
Now we can use the derivative. Let $g(x)=1+x\ln x$. Show that the minimum value of $g(x)$ in the interval $(0,e)$ is positive.
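Explicitly, $g'(x)=\ln x + 1$ vanishes only at $x=e^{-1}$, where $$g(e^{-1}) = 1 + e^{-1}\ln(e^{-1}) = 1-\frac{1}{e} > 0;$$ since $g$ is decreasing on $(0,e^{-1})$ and increasing on $(e^{-1},e)$, this is indeed the minimum.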
The interval $[e,\infty)$ is easier to deal with. Because $\sin x\ge -1$, on the interval $[e,\infty)$ our function is $\ge e^x-(\ln x-1)$, which is positive at $e$ and increasing in $[e,\infty)$. |
What is the difference between identically distributed random variables ? | It is true that $$P(X=x) = P(Y=x)$$ But that does not imply that the variables are the same.
You and I each toss a coin; call my toss $X$ and yours $Y$. Then $$P(X=head) = P(Y=head) = 1/2$$ But that does not mean that if I get heads, then you will get heads too; i.e., in general $X \neq Y$. |
Riemann Stieltjes integral and uniform convergence. | Using integration by parts,
$$\int_a^b f \, dg_n = f(b)g_n(b) - f(a)g_n(a) - \int_a^b g_n \, df.$$
Hence,
$$\lim_{n \to \infty}\int_a^b f \, dg_n= \lim_{n \to \infty}[f(b)g_n(b) - f(a)g_n(a)]- \lim_{n \to \infty}\int_a^b g_n \, df \\ = f(b)g(b) - f(a)g(a)- \lim_{n \to \infty}\int_a^b g_n \, df .$$
Since $g_n$ converges uniformly to $g$ we can interchange limit and integral as
$$\lim_{n \to \infty}\int_a^b g_n df = \int_a^b g \, df. $$
The interchange is valid as
$$\left|\int_a^b g_n \,df - \int_a^b g \, df \right| = \left|\int_a^b (g_n - g)\, df\right| \leqslant \sup_{x \in [a,b]}|g_n(x) - g(x)|\,[f(b) - f(a)] \to 0,$$ using that $f$ is increasing.
Using a second integration by parts,
$$\lim_{n \to \infty}\int_a^b f \, dg_n= f(b)g(b) - f(a)g(a)-\int_a^b g \, df = \int_a^bf \, dg.$$ |
Ring of infinite sequences. | Explicitly, all of the axioms hold because they hold componentwise.
This is the hardest question and I'll save it for last.
Explicitly, an inverse is a componentwise inverse.
$R^{\infty}$ won't be a UFD even if $R$ is; the problem turns out not to be the uniqueness of factorizations but existence. An element of $R^{\infty}$ is irreducible iff it has the form $\prod r_i$ where exactly one $r_i \in R$ is irreducible and the others are units. So the elements which are products of irreducibles have the form $\prod r_i$ where all but finitely many of the $r_i$ are units. For example, $\prod p_i \in \mathbb{Z}^{\infty}$, where the $p_i$ are the primes, is not a product of irreducibles. However, we do have an "infinite irreducible factorization" given by the componentwise irreducible factorization.
I don't know what you mean by "structure" here.
Now for some words on the ideals. Already the prime ideals are quite complicated. First, let's observe that there are some obvious ones, namely: if $P$ is any prime ideal of $R$, then there's a projection $\pi_i : R^{\infty} \to R$ given by taking the $i^{th}$ coordinate, and then a quotient map $R^{\infty} \to R \to R/P$. These are the "obvious" prime ideals, one copy of $\text{Spec } R$ for every coordinate. Geometrically, finite products of commutative rings correspond to disjoint unions of affine schemes.
But infinite products are more complicated. As is done in this answer by Eric Wofsey and this blog post, IMO the cleanest way to proceed is as follows. It turns out that there is a natural continuous map
$$\text{Spec } R^{\mathbb{N}} \to \beta \mathbb{N}$$
which sends a prime ideal $P$ of $R^{\mathbb{N}}$ to an ultrafilter $U$ on $\mathbb{N}$. This ultrafilter has the property that $P$ comes from a prime ideal of the quotient $R^{\mathbb{N}}/U$, which is an ultrapower of $R$. This ultrapower is elementarily equivalent to $R$, and so for example if $R$ is a field then so is any ultrapower, so in that case the space $\beta \mathbb{N}$ of ultrafilters is a complete description of the spectrum.
In general the ultrapower $R^{\mathbb{N}}/U$ has "nonstandard prime ideals" given by the kernels of quotient maps $R^{\mathbb{N}}/U \to (\prod R/P_i)/U$ to ultraproducts, where $P_i$ is a sequence of prime ideals; see Eric Wofsey's answer linked above for a worked example of this for $R = \mathbb{Z}$. There are even more complicated examples that I don't understand, but again see Eric's answer. |
What ring-sum of vector spaces can possibly mean? | The symbol $\oplus$ denotes the direct sum of (here) vector spaces; in this context, as $U,W$ are said to be subspaces of $V$, it denotes the internal direct sum, i.e., $U\cap W=\{0\}$ and $U,W$ together span $V$. |
When is the extension $\Bbb{Q}(\sqrt{\sqrt{5} +a})/\Bbb{Q}$ Normal? | $\Bbb{Q}(\sqrt{\sqrt{5} +a})/\Bbb{Q}$ is a normal extension iff the extension is Galois iff the order of the Galois group is 4. That means the Galois group is either isomorphic to $C_2\times C_2$ or to $C_4$. Using well known criteria for Galois groups of quartic polynomials shows the first case occurs when $a^2-5$ is a square and the second case when $5(a^2-5)$ is a square.
$a=3$ is the first example of $C_2\times C_2$ (here $a^2-5=4$ is a square) and $a=5$ the first example of $C_4$ (here $5(a^2-5)=100$ is a square). Those two values and $a=85$ are the only cases with $a<100$ where the Galois group is $C_2\times C_2$ or $C_4$. |
Bilinear product of two non-zero vectors is non-zero? | We can find bilinear operators where the product of nonzero vectors is zero. A trivial example would be the function that maps every pair of vectors to zero, which satisfies all of the bilinearity axioms.
A second example is the dot product: the dot product in $\mathbb{R}^2$ is a bilinear operator, and there exist pairs of perpendicular nonzero vectors. Their dot product is zero, but neither vector is the zero vector.
We can also build other examples. Suppose your vector space is $\mathbb{R}^2$. Let $e_1, e_2$ be an orthonormal basis, and define a bilinear operator by:
$$B(v,w) = (v\cdot e_1)(w\cdot e_2)$$
Essentially: take the dot product of the first vector with $e_1$, and the second vector with $e_2$, and multiply the resulting numbers together.
You can show that this is a bilinear operator because dot products are linear.
However, $e_1$ and $e_2$ are nonzero vectors where $B(e_2, e_1) = 0$.
One useful related property that bilinear operators can have is nondegeneracy: a bilinear operator is nondegenerate if the statement $\forall y, B(x,y)=0$ holds only when $x=0$.
The dot product in $\mathbb{R}^2$ is a bilinear operator with this property: if $x$ is a vector where for every $y \in \mathbb{R}^2$, $x \cdot y = 0$, then $x$ is necessarily the zero vector.
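For the operator $B$ constructed above, this property fails: $B(e_2,y) = (e_2\cdot e_1)(y\cdot e_2) = 0$ for every $y$, even though $e_2 \neq 0$, so $B$ is degenerate. |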
Supersets of unit ball in infinite dimension | In a normed space, a closed subset of a compact set must be compact. And we know that in an infinite-dimensional Banach space the closed unit ball is not compact (Riesz's lemma). So no superset of the unit ball can be compact. |
Special Case of A.M-G.M Inequality. | $$\mu=\frac{x_1+x_2+\cdots+x_n}{n}$$
$$l=\sqrt[n]{x_1\,x_2\,\cdots\,x_n}$$
we know $e^x\ge 1+x$, with equality only at $x=0$. For $i=1,2,\cdots,n$ set
$$x=\frac{x_i}{\mu}-1$$
we have
$$\large e^{\frac{x_i}{\mu}-1}\ge \frac{x_i}{\mu} $$
therefore
\begin{align}
\prod_{i=1}^{n} e^{\frac{x_i}{\mu}-1} \ \ge\ \prod_{i=1}^{n} \frac{x_i}{\mu}
\quad\Rightarrow\quad
e^{-n+\sum_{i=1}^{n}\frac{x_i}{\mu}} \ \ge\ \frac{\prod_{i=1}^{n} x_i}{\mu^n}
\end{align}
Since $\sum_{i=1}^{n} x_i = n\mu$, the exponent on the left-hand side vanishes, and therefore
$$1\ge \frac{l^n}{\mu^n} \quad\Rightarrow\quad \mu^n \ge l^n \quad\Rightarrow\quad \mu \ge l.$$
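Equality holds throughout precisely when each $x_i$ equals $\mu$, i.e. when all the $x_i$ are equal, since $e^x\ge 1+x$ is an equality only at $x=0$. |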
Prove $\frac{n !}{(n-k) !} \cdot k^{n-k}$ | Yes, it’s correct, apart from the notational problem that Fawkes4494d3 noted in the comments and fixed. You could also prove the result combinatorially. Suppose that you have players numbered from $1$ through $n$ and teams numbered from $1$ through $k$. The lefthand side is the number of ways to assign the $n$ players to the $k$ teams and then choose one player on each team to be captain. Thus, it is the number of ways to assign the $n$ players to $k$ numbered teams and assign each team a captain.
The factor of $\frac{n!}{(n-k)!}$ on the righthand side is the number of ways of choosing a captain for each of the $k$ teams, and $k^{n-k}$ is the number of ways to assign each of the remaining $n-k$ players to one of the $k$ teams, so the righthand side is also the number of ways to assign the $n$ players to $k$ numbered teams and assign each team a captain. |
Getting singular solution from parametric solutions of an ODE | I agree with your parametric solution, except that absolute values should appear inside the logarithms:
$$\begin{cases}x=\frac{\ln|p|}{p^2}+\frac{c}{p^2}\\
y=2\frac{\ln|p|}{p}+\frac{2c+1}{p}\end{cases}$$
The trouble comes from the singular point which exists on each curve.
For several values of $c$, the curve of the corresponding function $y(x)$ is drawn on a graph (figure omitted here); the envelope of the family of curves is drawn in red on it. Its equation is:
$$y^2=8x$$
But the envelope is not tangent to the curves, because the singular points lie on the envelope. In this special case, the equation of the envelope is not a solution of the ODE; that is why you don't obtain a singular solution. |
Congruence Equation $3n^3+12n^2+13n+2\equiv0,\pmod{2\times3\times5}$ | Work first modulo $2$. We can rewrite the congruence as $n^3+n\equiv 0\pmod{2}$. Note that this always holds.
Now work modulo $3$. The congruence can be rewritten as $n+2\equiv 0\pmod{3}$. This holds precisely if $n\equiv 1\pmod{3}$.
Finally, work modulo $5$. The congruence can be rewritten as $3n^3-3n^2+3n-3\equiv 0\pmod{5}$, and then as $(n-1)(n^2+1)\equiv 0\pmod{5}$. This has the solutions $n\equiv 1\pmod{5}$, $n\equiv 2\pmod{5}$ and $n\equiv 3\pmod{5}$.
So the conditions come down to $n\equiv 1\pmod{3}$ and $n\equiv 1$, $2$, or $3$ modulo $5$.
The solutions are therefore $n\equiv 1$, $n\equiv 7$ and $n\equiv 13\pmod{15}$.
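As a check, $n=7$ gives $3\cdot 7^3+12\cdot 7^2+13\cdot 7+2=1029+588+91+2=1710=30\cdot 57$, which is indeed divisible by $2\times3\times5$. |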
Determining if $ I_n = \int_0^{\sqrt{\pi n}} \sin(t^2) dt $ is a Cauchy Sequence | Substituting $u=t^2$ (and then renaming $u$ back to $t$), write $$I_n=\int_0^{n\pi}\frac{\sin(t)}{2\sqrt{t}}\,dt.$$ Then we have
$$\begin{align}
\left|I_n-I_m\right|&=\left|\int_{m\pi}^{n\pi}\frac{\sin(t)}{2\sqrt{t}}\,dt\,\right|\\\\
&=\frac12\,\left| \sum_{k=m}^{n-1}\int_{k\pi}^{(k+1)\pi}\frac{\sin(t)}{\sqrt{t}}\,dt\,\right|
\end{align}$$
Then, note that for $t\in[k\pi,(k+1)\pi]$, $\sin(t)$ is non-negative for even values of $k$ and non-positive for odd values of $k$.
Heuristically, for large $k$, we have
$$\int_{k\pi}^{(k+1)\pi}\frac{\sin(t)}{\sqrt{t}}\,dt\approx \frac{1}{\sqrt{k\pi}}\int_{k\pi}^{(k+1)\pi}\sin(t)\,dt= (-1)^k\frac{2}{\sqrt{k\pi}}$$
Inasmuch as $\lim_{m,n\to \infty}\sum_{k=m}^{n-1}\frac{(-1)^k}{\sqrt{k\pi}}=0$ (Leibniz's Test for alternating series), the sequence $I_n$ is indeed Cauchy. |
Differentiable at the origin and plot of a function. | $1)$ It is not differentiable at $(0,0)$. The function is not defined at this point: when you plug in $(0,0)$ you get $\frac{0}{0}$, which is undefined. If you are still not convinced, you can take the partial derivative of $f(x,y)$ with respect to $x$ and with respect to $y$ and plug in $(0,0)$:
$$
\frac{\partial f}{\partial x}=\frac{5 x^2 y^3 - y^5}{(x^2 + y^2)^{5/2}}
\\\frac{\partial f}{\partial y}=\frac{x^5 - 5 x^3 y^2}{(x^2 + y^2)^{5/2}}
$$
Now, plug in $(0,0)$: again, in both cases you will get $\frac{0}{0}$, which means the slope of the tangent line is undefined and $f(x,y)$ is not differentiable at $(0,0)$.
$2)$ I could not come up with a convenient way of plotting $f(x,y)=\frac {x^{3}y}{x^{6}+y^{2}}$ without using any software. The only thing I can say about it is that its domain is $x^6 +y^2>0$. This is how the function $f(r\cos\theta, r\sin\theta)$ looks: $$f(r\cos\theta, r\sin\theta)=\frac{r^4\cos^3\theta \,\sin\theta}{r^6\cos^6\theta + r^2 \sin^2\theta}$$
and its graph (figure omitted). |
Closed form for a 2D random walk probability distribution | First, a correction: if you have the pdf written correctly, the standard deviation should be $\sqrt{N}$ and the variance is $N$.
You are correct in your assertion that in each direction you will still have the same Normal form. So the pair $\begin{bmatrix}X\\Y\end{bmatrix}$ will have a bivariate Normal distribution with covariance matrix:
\begin{equation}
\Sigma = \begin{bmatrix}
N & 0 \\
0 & N
\end{bmatrix}
\end{equation}
If you would like to model the exact location after some steps, you should look into the Bivariate normal pdf.
If you only care about distance from the origin, then you may do what you were doing with $r$. That is
\begin{equation}
R = +\sqrt{X^2 + Y^2}
\end{equation}
Unfortunately, this may not have a particularly nice form. Here are some facts that may help. If $X \sim Normal$ then $X^2$ will have a (possibly non-central) chi-square distribution. Also, the sum of independent chi-square rv's is itself chi-square. Finally, the square root of a chi-square rv has a chi distribution (the one-degree-of-freedom case is the "half normal"). In the present case everything works out: $R$, the norm of a bivariate Normal vector with covariance $N I$, has a Rayleigh distribution with scale parameter $\sqrt{N}$. |
Rapid changes in signal correspond to high frequencies - proof | I don't think there is a straightforward way to show it mathematically without resorting to the core of Fourier analysis theory, but let us try to do it by inspection and analogy.
At first, keep in mind that $\sin$ and $\cos$ are probably the smoothest, most differentiation/integration-friendly continuous functions in all of maths.
Then, before concerning ourselves with aperiodic signals, let's think about periodic ones, which can be described by the Fourier series.
It seems intuitive from the definition and the concept of the Fourier series that if a periodic signal is composed of a sum or product of sines and cosines, then you don't even need to compute the series: just write down the Euler form of the sines and cosines in the signal, do some algebra, and you automatically find the coefficients.
Now, imagine the simplest infinite, periodic, discontinuous signal we usually deal with: the square wave. We both know that if we plot the Fourier series coefficients for that waveform, we will find something that resembles a $\operatorname{sinc}$ function, which has infinite support and therefore infinitely many harmonics.
We see the same for sawtooths, triangles and so on, until we arrive at the discontinuity itself: the Dirac delta function, or an impulse train. Their Fourier series contain all possible harmonics, up to infinity.
Now, to contrast with the real world: imagine we have a 1980s, non-ideal function generator and set it to output a square wave. We know from theory that an ideal square wave is discontinuous (the transition has infinite slope) and its Fourier series has infinitely many harmonics. However, physically, our old-school generator creates the transition from $0\,\mathrm{V}$ to $1\,\mathrm{V}$ as a very fast ramp with a finite slope and with no sharp edge between the ramp and the levels. Then, if we analyzed this output with a top-secret, up-to-infinity badass spectrum analyzer, we would see that beyond some finite number of harmonics the coefficients decay so fast as to be practically zero.
In that case, as a rule of thumb, we can naïvely say that the number of significant harmonics is "proportional" to the steepest slope in the function.
Finally, the same concept applies for aperiodic signals and the Fourier Transform.
Hope it was helpful. |
Maximum Area of a Rectangular Trough | Hint: let $h$ be the height of each side and $w$ be the width of the bottom. Can you write an equation based on the width of the strip? Can you write one for the area of the cross section? Solve the first for one variable, plug it into the second, and you will have an equation for the area that depends upon one variable. Take the derivative, set to zero....
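Carrying the hint through, with $W$ denoting the width of the strip (the particular value comes from the original problem): the constraint is $2h+w=W$ and the cross-sectional area is $A=hw$. Substituting $w=W-2h$ gives $$A(h)=h(W-2h), \qquad A'(h)=W-4h=0 \;\Rightarrow\; h=\frac{W}{4},\; w=\frac{W}{2},$$ so the maximal cross-section is $A=W^2/8$. |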
Show the point-set map is open and connected. | Note that we can write $\varphi(x) = \operatorname{graph}(\varphi) \cap (\{x\} \times X)$, which is an open set. Therefore, since $\varphi(C) = \bigcup_{x \in C} \varphi(x)$ is a union of open sets, it follows that $\varphi(C)$ is open.
Suppose, for contradiction, that $\varphi(C)=Y$ is not connected. Then there exist disjoint nonempty open sets $A$ and $B$ such that $Y=A \cup B$. Clearly $\varphi^{-1}(A) = \{ x \in C : \varphi(x) \subseteq A\}$ and $\varphi^{-1}(B) = \{ x \in C : \varphi(x) \subseteq B \}$ are nonempty (otherwise $C=B$, $C=A$, or $C=\varnothing$), and their union is all of $C$. Let us show that $\varphi^{-1}(A)$ and $\varphi^{-1}(B)$ are disjoint. Again by contradiction, suppose there exists $x \in C$ with $x \in \varphi^{-1}(A) \cap C$ and $x \in \varphi^{-1}(B) \cap C$. Since $A$ and $B$ are disjoint, $\varphi(x)$ would then be disconnected, because $\varphi(x) \subseteq A$ and $\varphi(x) \subseteq B$ with $A$ and $B$ disjoint. This is a contradiction, since by hypothesis $\varphi(x)$ is connected for all $x \in X$. It follows that $C= \varphi^{-1}(A) \cup \varphi^{-1}(B)$ is a disjoint union of nonempty sets, contradicting the fact that $C$ is connected. Hence $\varphi(C)$ is connected. |
Expansion of $n(n-1)(n-2)...(n-k)$ | Yes, there is a way: I think Vieta's formulas are what you are looking for. The roots of this polynomial in $n$ are $0,1,2,\dots,k$, so the coefficients of the expansion are, up to sign, the elementary symmetric polynomials of these roots; for example, $n(n-1)(n-2)=n^3-3n^2+2n$.
https://en.m.wikipedia.org/wiki/Vieta%27s_formulas |
Prime number inequality | The inequality deals with the primorial function
$$
p_n\#=2\cdot3\cdot5\cdots p_n=\prod_{p\le p_n,\ p\in \mathbb{P}}p,
$$
where $p_n$ is the $n$th prime.
Asymptotically we have the result that
$$
\lim_{n\to\infty}\frac{\ln p_n\#}{p_n}=1.
$$
Early on the primorials are a bit smaller, though. For example $\ln(59\#)\approx49$.
Consider the following problem. Assume that $xy=p_n\#$ and that $x-y\ge3$ (in OP $x,y$ were constrained to be integers of opposite parity such that $x-y>1$ implying that $x-y\ge 3$). Therefore
$$x^m-y^m\ge x^2-y^2=(x-y)(x+y)\ge3(x+y).$$
Here by the AM-GM inequality $x+y\ge2\sqrt{xy}=2\sqrt{p_n\#}.$ Therefore asymptotically we get a lower bound
$$
x^m-y^m\ge 6\sqrt{p_n\#}\ge6e^{\frac n2(1+o(1))}.
$$
Asymptotically we also have $p_n\approx n\ln n.$ This suggests that
$$
\frac{\ln(x^m-y^m)}{\ln p_n}\ge \frac n{2\ln n} K(n),
$$
where $K(n)$ is some correction factor (bounded away from zero) that I won't calculate.
Your result says (using only the main term $p_n^2$) that
$$
\frac{\ln(x^m-y^m)}{\ln p_n}\ge 2.
$$
So asymptotically it is weaker. But it would not be fair to call your result trivial because of this. I'm not a number theorist, but I have seen simpler estimates being derived in many number theory books, and in addition to being fun, they pave the road to stronger results.
Please share details of your argument with us, so that we can comment and give you other kind of feedback! |
How do I calculate this error? | A common one is the "Relative Percent Difference," or RPD, used in laboratory quality control procedures. Although you can find many seemingly different formulas, they all come down to comparing the difference of two values to their average magnitude:
$$d_1(x,y)=\frac{2(x−y)}{|x|+|y|}.$$
This is a signed expression, positive when $x$ exceeds $y$ and negative when $y$ exceeds $x$. Its value always lies between $−2$ and $2$ .
So you can use
$$d(x,y)=\left|\frac{Experimental−Theoretical}{|Experimental|+|Theoretical|}\right|,$$
to achieve a value between $0$ and $1$.
Also, a Wikipedia article on Relative Change and Difference observes that
$$d_\infty(x,y)=\frac{|x-y|}{\max(|x|,|y|)}$$
is frequently used as a relative tolerance test in floating point numerical algorithms.
So you can use
$$d(x,y)=\frac{|Experimental-Theoretical|}{\max(|Experimental|,|Theoretical|)},$$
to achieve a value between $0$ and $1$.
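For instance, with an experimental value of $9.8$ and a theoretical value of $10$, the first formula gives $0.2/19.8\approx 0.0101$, while the second gives $0.2/10=0.02$; both lie between $0$ and $1$. |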
Is the sheaf of locally constant functions flasque? | No: take two disjoint open sets $U$ and $V$ lying in the same connected component $X_0$ of the entire space $X$. Then define a section on $U \cup V$ by the function being $0$ on $U$ and $1$ on $V$. This section cannot extend to $X$, since any extension would be locally constant, hence constant on the connected component $X_0$, which contains both $U$ and $V$. |
Integral $\int_{-\pi/2}^{\pi/2} \frac{e^{|\sin x|}\cos x}{1+e^{\tan x}}$ | With $x=-t$, $$I=\int_{-\pi/2}^{\pi/2} \frac{e^{|\sin t|}\cos t}{1+e^{-\tan t}}\,\mathrm dt.$$ Add this to the initial integral: since $\frac{1}{1+e^{\tan x}}+\frac{1}{1+e^{-\tan x}}=1$ and $e^{|\sin x|}\cos x$ is an even function, we get $$2I=\int_{-\pi/2}^{\pi/2} e^{|\sin x|}\cos x\,\mathrm dx=2\int_{0}^{\pi/2} e^{\sin x} \cos x\,\mathrm dx,$$ where we used that $\sin x$ is positive for $x\in [0,\frac{\pi}{2}]$. Now just substitute $\sin x= u$ to get: $$I=\int_0^1 e^u\,\mathrm du=e-1$$ |
With Euler's method for differential equations, is it possible to take the limit as $h \to 0$ and get an exact approximation? | Not really. Recall that the Euler method for $y^\prime=f(x,y)$ takes the form
$$\begin{align*}x_{k+1}&=x_k+h\\y_{k+1}&=y_k+hf(x_k,y_k)\end{align*}$$
for some stepsize $h$. Taking $h=0$ here is equivalent to not moving at all!
However, your idea can be made slightly more practical. Consider the application of the Euler method with stepsize $h/2$:
$$\begin{align*}x_{k+1/2}&=x_k+h/2\\y_{k+1/2}&=y_k+hf(x_k,y_k)/2\\x_{k+1}&=x_{k+1/2}+h/2\\y_{k+1}&=y_{k+1/2}+hf(x_{k+1/2},y_{k+1/2})/2\end{align*}$$
The value of $y_{k+1}$ from these two steps can usually be expected to be a bit more accurate than the value of $y_{k+1}$ from the $h$-step Euler method. We can keep playing the game, taking 4 steps with stepsize $h/4$, 8 steps with stepsize $h/8$, and so on, yielding a sequence of estimates for $y_{k+1}$ corresponding to decreasing $h$.
One way one might estimate the result of what happens when $h\to 0$ is to take all those estimates of $y_{k+1}$ along with the associated stepsizes, and then fit an interpolating polynomial to those. For example, taking $y_{k+1}^{(0)}$ to be the result for stepsize $h$, $y_{k+1}^{(1)}$ the corresponding result for stepsize $h/2$, and $y_{k+1}^{(2)}$ the result for stepsize $h/4$, one can fit a quadratic interpolating polynomial to the three points $\{(h,y_{k+1}^{(0)}),(h/2,y_{k+1}^{(1)}),(h/4,y_{k+1}^{(2)})\}$, and then estimate the limit as $h\to 0$ by evaluating the interpolating polynomial thus obtained at 0.
This scheme of using the interpolating polynomial to estimate the limit to 0 is called Richardson extrapolation. In practice, one certainly takes more than three points for the interpolation, and the order of the interpolating polynomial needed is estimated based on the behavior at past points. The idea of using Richardson extrapolation is due to Roland Bulirsch and Josef Stoer. The Bulirsch-Stoer method discussed here uses a slightly different method for integrating the differential equation (the modified midpoint method) from which the extrapolations are built (as well as a slightly modified extrapolation method), but is essentially the same idea as presented here.
Here's a tiny Mathematica demonstration of the Bulirsch-Stoer idea:
Table[(Apply[List,
y /. First[
NDSolve[{y'[x] == x - y[x], y[0] == 1}, y, {x, 0, 1},
MaxStepFraction -> 1, Method -> "ExplicitEuler",
InterpolationOrder -> 1, StartingStepSize -> 2^-k]]])[[4,
3, -2]], {k, 4}] - 2/E
{-0.235759, -0.102946, -0.0485411, -0.0236106}
Here we tried the Euler method on the differential equation $y^\prime=x-y$ with initial condition $y(0)=1$ for the stepsizes $h=1/2,1/4,1/8,1/16$, and compared the result at $x=1$ with the true value $y(1)=2/e$. As you can see the accuracy isn't too good for any of these.
Here's what happens after Richardson extrapolation using those same results from Euler:
InterpolatingPolynomial[
Table[{2^-k, (Apply[List,
y /. First[
NDSolve[{y'[x] == x - y[x], y[0] == 1}, y, {x, 0, 1},
MaxStepFraction -> 1, Method -> "ExplicitEuler",
InterpolationOrder -> 1, StartingStepSize -> 2^-k]]])[[4,
3, -2]]}, {k, 4}], 0] - 2/E
0.0000823142
We end up with a result good to three or so digits, much better than even the result of Euler corresponding to $h=1/256$. Pretty good, I would say, considering that only $2+4+8+16=30$ evaluations of $f(x,y)=x-y$ were needed for a result with three-digit accuracy, while Euler with $256$ steps (and thus $256$ evaluations of $f(x,y)$) can only manage two good digits.
As an aside, Mathematica implements Bulirsch-Stoer internally in NDSolve[], as the option Method -> "Extrapolation". |