title | upvoted_answer
---|---
A version of the Riemann existence theorem | Using Riemann Roch, for example, you can find an embedding of $X$ in some $\Bbb P^N$. Similarly, for sufficiently large $k\in\Bbb N$, the line bundle (invertible sheaf) $\mathscr F\otimes \mathscr O(k)$ has a nontrivial global holomorphic section $s$. Letting $\sigma$ be a nontrivial section of $\mathscr O(k)$, i.e., a homogeneous polynomial of degree $k$ (restricted to $X$), $s/\sigma$ is a global meromorphic section of $\mathscr F$. |
Show that given $\epsilon > 0,$ there exist $N>0$ and $M>0$ so that $\int_{\{x:|x|>N\}} f<\epsilon$ and $\int_{\{x:|f(x)|>M\}} f<\epsilon.$ | Here, I will assume a slightly more general version, namely when $f:\Bbb R\to \Bbb R\cup\{\pm \infty\}$ is Lebesgue integrable.
Consider $f_n:=\chi_{\{x\in \Bbb R:|x|>n\}}\cdot f$; then $f_n\to 0$ pointwise a.e. and $|f_n|\leq |f|$. So, the Lebesgue Dominated Convergence Theorem gives $$\lim\int f_n =0.$$
Next, consider $g_n:= \chi_{\{x\in \Bbb R:|f(x)|>n\}}\cdot f$; then $g_n\to 0$ pointwise a.e. Note that here we are using the fact that integrability of $f$ implies $m\big(|f|^{-1}(\infty)\big)=0$; in case $f$ only takes real values this is obvious. Also, $|g_n|\leq |f|$. So, the Lebesgue Dominated Convergence Theorem gives $$\lim\int g_n =0.$$ |
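As a numeric illustration of the first limit (not a proof), take the concrete integrable function $f(x)=e^{-|x|}$: the tail mass over $\{|x|>N\}$ is $2e^{-N}$, so any $\epsilon>0$ admits a suitable $N$. The helper below is ad hoc for this example.

```python
import math

def tail(N, steps=200_000, cutoff=60.0):
    # midpoint rule on [N, cutoff], doubled by the symmetry of f(x) = e^{-|x|}
    h = (cutoff - N) / steps
    return 2 * sum(math.exp(-(N + (i + 0.5) * h)) * h for i in range(steps))

eps = 1e-3
N = math.log(2 / eps)   # chosen so the exact tail 2*e^{-N} equals eps
assert tail(N) < eps
assert tail(N) > 0.99 * eps
```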
Hopf map by complex numbers | The Hopf map is really only defined up to some choice of cosmetic tweaking. Given any two rotations $R$ and $S$ of $\mathbb{R}^4$ and $\mathbb{R}^3$, you can swap one Hopf map $h:S^3\to S^2$ with another one given by the composition $S\circ h\circ R^{-1}$. In particular, the relationship between your two Hopf maps is that $c$ and $d$ are switched, which amounts to an improper reflection in the domain $S^3$. Mathworld has yet a different incarnation of the formula with coordinates permuted.
(Also one can perform a conformal automorphism of the Riemann sphere $\mathbb{C}\cup\{\infty\}$ instead of a rotation of the sphere $S^2$, since they amount to the same thing.) |
Inequality involving Complex Polynomials | You were on the right path. $g = q/p$ is holomorphic in
$1 \le |z| < \infty$, with a removable singularity at
$z = \infty$, and therefore is bounded by its values on $|z| = 1$.
If you feel uncomfortable with a holomorphic continuation at $z=\infty$
then consider
$$
h(z) = \frac{q(1/z)}{p(1/z)}
$$
instead, which has a removable singularity at $z=0$.
(Note that it suffices to require $\deg q \le \deg p$
instead of $\deg q = \deg p$.) |
$L=\left\{awb\mid a,b\in \Sigma \wedge a<b \wedge w \in \Sigma^*\right\}$ find NFA that accepts L | Label the arrows as below:
$q_0\rightarrow q_1$ as $1$
$q_0\rightarrow q_2$ as $1, 2$
$q_0\rightarrow q_3$ as $1, 2, 3$
$q_1\rightarrow q_4$ as $2$
$q_2\rightarrow q_5$ as $3$
$q_3\rightarrow q_6$ as $4$
and $q_1, q_2, q_3$ have self loops labelled $1, 2, 3, 4$. |
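The labelled transitions can be sanity-checked with a small simulation. Here the alphabet is assumed to be $\{1,2,3,4\}$ and $q_4,q_5,q_6$ the accepting states; those details are inferred, since the original diagram is not reproduced.

```python
# Transition relation of the NFA sketched above (inferred details noted
# in the lead-in). Each branch guesses the final symbol b.
delta = {
    ('q0', 1): {'q1', 'q2', 'q3'},
    ('q0', 2): {'q2', 'q3'},
    ('q0', 3): {'q3'},
    ('q1', 2): {'q4'},
    ('q2', 3): {'q5'},
    ('q3', 4): {'q6'},
}
# self loops on q1, q2, q3 over the whole alphabet (for the middle part w)
for q in ('q1', 'q2', 'q3'):
    for s in (1, 2, 3, 4):
        delta.setdefault((q, s), set()).add(q)

def accepts(word):
    states = {'q0'}
    for sym in word:
        states = set().union(*(delta.get((q, sym), set()) for q in states))
    return bool(states & {'q4', 'q5', 'q6'})

assert accepts([1, 2])            # a=1 < b=2
assert accepts([2, 4, 1, 3])      # a=2, w=41, b=3
assert not accepts([3, 1])        # 3 > 1
assert not accepts([4, 4, 4])     # no b > 4 exists
```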
$G$ be a group and $H$ be a normal subgroup of index $p$ (a prime); suppose $K$ is a subgroup of $G$ not contained in $H$; then $G=HK$? | By Lagrange's theorem,
$$[G : HK][HK:H] = [G:H] = p$$
Since $K \not\subset H$ we have $HK \neq H$ (note that $HK$ is a subgroup because $H$ is normal), so $[HK:H] > 1$. As $[G:HK][HK:H] = p$ with $p$ prime, $[HK:H] = p$, so $[G:HK] = 1 \implies G = HK$. |
Dedekind's "different": sources, definition, original name | See :
Nicolas Bourbaki, Elements of the History of Mathematics, (French ed, 1984), page 102 :
in 1882, Dedekind completes the theory [of ideals] by introducing the different, which gives him a new definition of the discriminant and allows him to specify the exponents of the prime ideal factors in the decomposition of the latter.
In the German text of Richard Dedekind, Über die Discriminanten endlicher Körper (1882), Abhandlungen der Königlichen Gesellschaft der Wissenschaften zu Göttingen 29 (2): 1–56, see page 1 for "Grundzahl oder Discriminante" and page 38 for "Grundideal des Körpers". |
Confusing probability question, please help me | We have to determine the probability that a random word of length $4$ over the alphabet $\{{\tt a,\ldots,z,A,\ldots, Z,0,\ldots 9}\}$ satisfies certain conditions. There are $62^4=14\,776\,336$ equiprobable words.
All admissible words consist of four different symbols. Two of the symbols have to be alphabetical and two of them have to be numerical. I assume that pairs of type $\{{\tt a, A}\}$ are forbidden. We then can choose two letters from the alphabet in ${26\choose2}=325$ ways and then realize each of these choices as symbol pair in $4$ ways: $\{{\tt a, b}\}$,$\{{\tt a, B}\}$,$\{{\tt A, b}\}$,$\{{\tt A, B}\}$. The numerical pair can be chosen in ${10\choose2}=45$ ways. When the four symbols have been chosen we can arrange them in $4!=24$ ways. It follows that there are $325\cdot4\cdot45\cdot24=1\,404\,000$ admissible words.
The probability $p$ in question then comes to
$$p={1\,404\,000\over14\,776\,336}\doteq0.0950\ .$$ |
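The count above can be reproduced directly; this is a cross-check, not part of the original answer.

```python
from math import comb, factorial

# 4-symbol words over 62 characters: two case-insensitively distinct
# letters (each in 4 case patterns) and two distinct digits, arranged.
total = 62 ** 4
admissible = comb(26, 2) * 4 * comb(10, 2) * factorial(4)
assert total == 14_776_336
assert admissible == 1_404_000
p = admissible / total
assert abs(p - 0.0950) < 1e-4
```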
Prove $\lim_{x\to 0^+} 1/x = +\infty$ and $\lim_{x\to 0^-} 1/x = -\infty$ | Consider $\lim_{x \to 0^+} 1/x$. Given any $M \in \mathbb{R}^+$, for all $x \in (0,1/M)$, we have $1/x > M$. Hence, $$\lim_{x \to 0^+} 1/x = + \infty$$
Similarly, adapt the proof for the other case. |
Is there an established notation for $A_1A_2\dots A_n$, a product of matrices? | Summarizing the discussion:
In contrast to sigma/product notation, it seems $A_1\dots A_n$ or $A_n\dots A_1$ are generally seen as acceptable and clear.
Using bodil's terminology of left/right products, $\prod_{k=1}^{n} A_k$ is probably an acceptable way to write the left product $ A_n\dots A_1$, even if it doesn't save too many characters. However, if the $A_i$ have a special form, such as $A_i = B_i +\lambda_i I$, the product notation might be better.
Regarding right products, the most common option seems to be either the standard dot-dot-dot notation or $\prod_{k=1}^{n} A_{n-k+1}$. Several people also suggested swapping the limits, viz. $\prod_{k=n}^{1} A_k$. greg had another suggestion: $A_1\cdots A_n = \left(\prod _{k=1}^{n}A_k^T\right)^T$. $\coprod$ was not well-received. |
Deformation Retract of Complement of Two Linked Circles in $\mathbf R^3$ | This is a partial answer.
To see the deformation retraction, look at the flow lines in the diagram below, which shows them in the vertical and horizontal slices. For the intermediate slices, we gradually interpolate from the horizontal slice to the vertical slice.
(In the diagram, dashed circles are the slices of the torus; bold circles and points are the slices of the removed circles). |
Prove $R$ has no nonzero nilpotent elements iff $x^2=0$ implies $x=0$. Then show idempotents are central in such a ring. | Suppose $x^n=0$ with $n>1$. We can always find $1\le m<n$ such that $x^m=0$. By induction, this shows that $x=0$.
If $n$ is even, this follows from the hypotheses in (b).
If $n$ is odd, then $x^{n+1}=0$, so $x^{\frac{n+1}{2}}=0$ with $\frac{n+1}{2}<n$ as $1<n$.
Also, we say an idempotent is central if it is in the center of $R$, i.e. if it commutes with all elements in the ring. The implication that any of these equivalent conditions implies all idempotents are central can be found here.
Hope this helps. |
$360\times 60$ nautical mile is not equal with $6400$ km of earth radius | You need to divide by $2\pi$ to get the radius. You have calculated the circumference. |
Is sum of digits of $3^{1000}$ divisible by $7$? | I am not answering the question but the post asks for clues so here it is a couple of ideas.
If $a_0,a_1,a_2, \cdots , a_{477} $ are the decimal digits of $3^{1000}$ then the numbers
$$b_i=a_{6i}+a_{6i+1}\cdot 10+a_{6i+2}\cdot 10^2+a_{6i+3}\cdot 10^3+a_{6i+4}\cdot 10^4+a_{6i+5}\cdot 10^5$$
for $i=0,1,2,\cdots, 79$ are the digits of $3^{1000}$ in base $10^6$ ($3^{1000}$ has $478$ decimal digits, and $80$ is the smallest integer at least $478 \div 6$, so there are $80$ digits numbered $0,1,2,\cdots,79$)
In other words:
$$ 3^{1000} = b_0 + b_1 \cdot 10^6+ \cdots + b_{79} \cdot (10^6)^{79}$$
Now, if we resort to modular arithmetic we see that
$$ 3^0=1, 3^1=3 , 3^2=2, 3^3=6, 3^4=4, 3^5=5, 3^6=1 $$ (all the equalities taken modulo $7$).
Also $$3^{1000}=(3^6)^{166}\cdot 3^4= 1 \cdot 4 = 4$$(all the equalities taken modulo $7$).
Now if we note that $10^6=1$ (modulo 7) the expression of $3^{1000}$ in base $10^6$ reads (modulo 7)
$$ 4=b_0+b_1+ \cdots +b_{79}$$
So we can assert that the sum of the digits of $3^{1000}$ in base one million gives a remainder of $4$ when divided by $7$.
Another partial result comes from the decimal expansion read modulo 7:
$$3^{1000}= a_0+ a_1 \cdot 10 + \cdots +a_{477} \cdot 10^{477} \equiv 4 \equiv a_0+3 a_1+ 2a_2 + 6 a_3 + 4 a_4 + 5 a_ 5 + \cdots \pmod 7$$
So, given that $a_0=1$ (the last decimal digit of $3^{1000}$), we can say that this particular linear combination of the remaining digits is congruent to $3$ modulo $7$. |
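Both observations are cheap to verify with exact integer arithmetic:

```python
# Verifying the facts used above for N = 3**1000.
N = 3 ** 1000
assert len(str(N)) == 478        # decimal digits a_0 ... a_477

# digits of N in base 10**6
b = []
m = N
while m:
    b.append(m % 10**6)
    m //= 10**6

assert len(b) == 80              # 80 base-10^6 digits b_0 ... b_79
assert pow(3, 1000, 7) == 4      # 3^1000 = 4 (mod 7)
assert sum(b) % 7 == 4           # since 10^6 = 1 (mod 7)
```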
Solving diff. equations involving vector product | Since
$$\frac{d}{dt} (\dot r -a\times r)= \ddot r -a\times \dot r=0$$
the expression $\dot r -a\times r$ is constant in time. Denote it by $b$; done.
In part (c), the additional assumption delivers $b\perp a$, which makes it possible to write $b$ as the cross-product of $a$ with another vector.
In part (d), you differentiate the quantity which you want to prove constant, and use the ODE:
$$\frac{d}{dt}((r+c)\cdot (r+c)) = 2 \dot r \cdot (r+c) =0$$
because $\dot r$ is the cross product of something with $r+c$, hence orthogonal to it.
And so on... |
What is meant by "The set of rational numbers is not closed when taking limits"? | The first statement means that you might have a sequence of rational numbers which converge to a limit, but the limit is not rational. E.g.,
here's a sequence of rational numbers:
$3, 3.1, 3.14, 3.141, 3.1415, 3.14159, ....$
in which each term is the first few digits of $\pi.$ The limit of this sequence is $\pi$, which is not rational.
The second statement says that the real numbers don't have this "flaw." Every convergent sequence of rational numbers (or for that matter, real numbers) converges to a real number. So the reals ARE closed under limits. |
Orthogonal vectors and complex numbers | Let $\mathbf{x}$ and $\mathbf{y}$ be $n\times 1$ orthogonal vectors, meaning that $\mathbf{x}^T\mathbf{y} =\mathbf{y}^T\mathbf{x} = 0$. If $a\mathbf{x} + b\mathbf{y} = \mathbf{0}$ then $a\mathbf{x}^T + b\mathbf{y}^T = \mathbf{0}^T$. Multiply both sides by $\mathbf{y}$, and because $\mathbf{x}^T \mathbf{y} = 0$ we obtain $a\mathbf{x}^T\mathbf{y} + b\mathbf{y}^T\mathbf{y} = b \mathbf{y^T} \mathbf{y} = 0$. Because $\mathbf{y}$ is non-zero, $\mathbf{y}^T\mathbf{y} = \sum_{i=1}^n y_i^2 > 0$ and so $a\mathbf{x} + b\mathbf{y} = 0$ only if $b = 0$. If $b = 0$ then $a\mathbf{x} = \mathbf{0}$ and because $\mathbf{x} \neq \mathbf{0}$, $a = 0$ completing the proof. |
Need help in deducing 2 arguments related to separable extension | Following your given definition and notation. Let $u \in F$ and $u^{p^n}=:a \in K$. Then $u$ is a root of $f:=(x-u)^{p^n}=x^{p^n}-u^{p^n}= x^{p^n}-a= \in K[x]$ (the first equality holds since we are in characteristic $p$). Now the minimal polynomial of $u$ divides $f$ so it is of the form $(x-u)^m$ for some $m\leq p^n$ but this means that $u$ is purely inseparable over $K$. Since $u$ was arbitrary this means that $F$ is purely inseparable over $K$.
Edit: Here is a more detailed sketch for part (c):
If $u \in F$ is purely inseparable over $K$ then the field extension $K(u)$ is purely inseparable over $K$. This is because $u^{p^n} \in K$ for some $n$ and any element $u'$ in $K(u)$ can be written as $g(u)$ for some $g \in K[x]$ but then $u'^{p^n}=g(u)^{p^n}=g(u^{p^n}) \in K$.
If $u \in F$ is purely inseparable over an intermediate field $L$ and $L$ is purely inseparable over $K$ then $u$ is already purely inseparable over $K$. This is because $u^{p^n} \in L$ for some $n$ and so since $L$ is purely inseparable there is some $m$ such that $(u^{p^n})^{p^m}=u^{p^{m+n}} \in K$
Finally, let $F$ be generated by some set $U$ of (over $K$) purely inseparable elements and $u \in F$. Then there is a finite subset $U'=\{u_1,...,u_k\}$ such that $u$ lies already in the field extension generated by $U'$. By (1.) the field extension $K[u_1,...,u_l]$ ($l \leq k$) is purely inseparable over $K[u_1,...,u_{l-1}]$ so by (2.) and a trivial induction it is actually already purely inseparable over $K$. Since $u \in K[u_1,...,u_k]$ this means $u$ is purely inseparable over $K$ which proves the assertion. |
Divergence chain rule | $$\eqalign{\vec{\nabla} \cdot f(g(\vec{x})) &= \frac{\partial}{\partial x_1} f_1(g(\vec{x})) + \frac{\partial}{\partial x_2} f_2(g(\vec{x}))\cr
&= D_1(f_1)(g(\vec{x})) D_1(g_1)(\vec{x}) + D_2(f_1)(g(\vec{x}))D_1(g_2)(\vec{x}) + D_1(f_2)(g(\vec{x})) D_2(g_1)(\vec{x}) + D_2(f_2)(g(\vec{x}))D_2(g_2)(\vec{x})}$$
where $D_i$ is derivative with respect to the $i$'th variable. |
Are functions which take on the values $\pm \infty$ considered unbounded? | I can think of two definitions of boundedness that could be used here.
Boundedness on ordered sets, where a set is bounded if there exists an element greater than or equal to every element in the set, and similarly an element below every element.
This concept is useless on the extended reals, as all sets will be bounded.
Boundedness on metric spaces, where a set is bounded if it can be contained in a ball.
The extended real line is not a metric space under the usual metric on the real numbers, but there exist metrics that turn it into one. Any such metric makes the distance from $-\infty$ to $\infty$ some finite number, and then the whole extended real line is contained in a ball of this radius (any sane metric on the extended reals should have these two points farthest away from each other).
Again, the concept of boundedness is uninteresting, as all sets are bounded.
In general if someone talked about sets being bounded, I would therefore assume they are restricting themselves to the definition of boundedness on the reals.
For your example though, if we restrict to the reals the function will not be defined at $0$, so it would be bounded on its domain. So for us to say it was unbounded we would have to specifically define what we mean with unboundedness on the extended reals in such a way that anything containing $\infty$ would be unbounded. |
Proof that vector space $C([a,b])$ of all functions from $[a,b]\to \mathbb{R}$ is infinite dimensional? | Your supposed definition of infinite dimensional vector space doesn't make sense. A vector space $V$ is infinite dimensional if no finite subset of $V$ generates $V$. This is equivalent to the assertion that $V$ has an infinite subset which is linearly independent.
The space $\mathcal{C}\bigl([a,b]\bigr)$ is infinite dimensional because the set $\{1,x,x^2,x^3,\ldots\}$ is linearly independent. |
MLE mean and variance for Gaussian Univariate | When you say "added", do you mean you found the actual sums, as follows?
$$
\begin{array}{r}
0.42 - 0.087 \\
-0.2-3.3 \\
1.3-0.32 \\
0.39 + 0.71 \\
\vdots\quad\qquad{}
\end{array}
$$
If so, that doesn't get you the estimates for the two-dimensional Gaussian distribution. What you need is the estimated mean and variance, and the variance is a $2\times 2$ matrix, sometimes called the "covariance matrix" because its entries are covariances (in particular, its diagonal entries are variances). Notice the pair $(\mu,\Sigma)$. The expected value $\mu$ is normally (no pun intended) thought of as a $2\times1$ column vector. Its entries will be just the average for the first column in your table and the average for the second column. The matrix
$$
\Sigma = \begin{bmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{12} & \sigma_{22} \end{bmatrix}
$$
has as entries the variance $\sigma_{11}$ of the first scalar-valued random variable, the variance $\sigma_{22}$ of the second one, and the covariance $\sigma_{12}$ between them.
The MLE for $\sigma_{11}$ is
$$
\frac{1}{10}\sum_{i=1}^{10} (x_{1i} - \bar{x}_{1\bullet})^2
$$
where $$\bar{x}_{1\bullet}= \frac{1}{10}\sum_{i=1}^{10} x_{1i}$$
is the sample average for the first column. (This differs from the conventional unbiased estimate in that the denominator is $10$ rather than $10-1$.) The MLE for $\sigma_{22}$ is found similarly by using the second column. The MLE for $\sigma_{12}$ is
$$
\frac{1}{10}\sum_{i=1}^{10} (x_{1i}-\bar{x}_{1\bullet})(x_{2i}-\bar{x}_{2\bullet}).
$$
See the Wikipedia article Estimation_of_covariance_matrices, in particular its section on maximum-likelihood estimation. |
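A pure-Python sketch of the MLE described above for 2-D data. The data points here read the first four rows of the asker's table as $(x_1,x_2)$ pairs; this is illustrative only, since the full table is not shown.

```python
data = [(0.42, -0.087), (-0.2, -3.3), (1.3, -0.32), (0.39, 0.71)]
n = len(data)

# MLE of the mean vector: column-wise averages
mean = [sum(row[i] for row in data) / n for i in range(2)]

def sigma(i, j):
    # MLE (co)variance: denominator n, not the unbiased n - 1
    return sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in data) / n

Sigma = [[sigma(0, 0), sigma(0, 1)],
         [sigma(1, 0), sigma(1, 1)]]

assert Sigma[0][1] == Sigma[1][0]          # covariance matrix is symmetric
assert Sigma[0][0] > 0 and Sigma[1][1] > 0 # diagonal entries are variances
```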
Determine if Objects are moving towards each other | I suppose that "moving towards each other" means that the distance between the objects is decreasing.
The velocity of object 2 relative to object 1 is given by $v := v_2 - v_1$.
The displacement of object 2 from object 1 is given by $d:= p_2 - p_1$.
We may simply take the dot product $v \cdot d$. If the result is positive, then the objects are moving away from each other. If the result is negative, then the objects are moving towards each other. If the result is $0$, then the distance is (at that instant) not changing.
The objects will only collide (assuming that the velocities are constant over time) if $v$ is parallel to $d$, which is to say that $v$ is a multiple of $d$. Of course, we must additionally have that the objects are moving towards each other. |
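The sign test above is a one-liner; the helper name below is ad hoc:

```python
# Sign of dot(v2 - v1, p2 - p1): negative means approaching, positive
# means separating, zero means the distance is momentarily constant.
def approach_sign(p1, v1, p2, v2):
    d = [b - a for a, b in zip(p1, p2)]   # displacement of 2 from 1
    v = [b - a for a, b in zip(v1, v2)]   # relative velocity
    return sum(di * vi for di, vi in zip(d, v))

# object 2 is to the right of object 1 and moves left: approaching
assert approach_sign((0, 0), (0, 0), (10, 0), (-1, 0)) < 0
# ... and moving right: separating
assert approach_sign((0, 0), (0, 0), (10, 0), (1, 0)) > 0
```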
Real Analysis, Folland Theorem 2.41/Exercise 53 The $n$ dimensional Lebesgue integral | Note that in the statement of the theorem, I renamed $\phi$ by $\psi$. It will help to highlight the relationship between the proof of this theorem and the proof of Theorem 2.26.
Theorem 2.41/Exercise 53 - If $f\in L^1(m)$ and $\epsilon > 0$, there is a simple function $\psi = \sum_{1}^{n}a_j\chi_{R_j}$ where each $R_j$ is a product of intervals, such that $\int |f - \psi| < \epsilon$, and there is a continuous function $g$ that vanishes outside a bounded set such that $\int |f - g| < \epsilon$.
Proof:
By theorem 2.10 we can find a sequence of simple functions $\{\phi_j\}$ with $\phi_j\rightarrow f$ pointwise and $|\phi_1|\leq |\phi_2|\leq \ldots \leq |f|$. Now, $|\phi_j - f|\rightarrow 0$ pointwise and $$|\phi_j - f|\leq |\phi_j| + |f| \leq 2|f|$$ Applying the Dominated Convergence Theorem, $$\lim_{j\rightarrow \infty}\int |\phi_j - f| = \int 0 = 0$$
So, we can find a simple function $\phi = \sum_{1}^{n}a_j\chi_{E_j}$ within $L^1$-distance of $\epsilon/2$ from $f$, that means
$$ \int |\phi - f|<\epsilon/2$$
We can assume, without loss of generality, that the $E_j$'s are disjoint and for all $j$, $a_j \neq 0$. So, since
$$\sum_{j=1}^n |a_j| \mu(E_j) = \int |\phi| \leq \int |f| + \int |\phi - f|< \int |f| + \epsilon/2 <+\infty $$
we have that all of the $E_j$'s have finite measure. So, by Theorem 2.40 item c, we have that, for each $E_j$, there is a finite union of rectangles $F_j$ whose sides are intervals such that $\mu(E_j \ \triangle \ F_j) < \frac{\epsilon}{2|a_j|n}$. Now,
\begin{align*}\int\left|\sum_{1}^{n}a_j\chi_{E_j} - \sum_{1}^{n}a_j \chi_{F_j}\right| &\leq \sum_{1}^{n}|a_j|\int |\chi_{E_j} - \chi_{F_j}|\\ &= \sum_{1}^{n}|a_j|\mu(E_j \ \triangle \ F_j)\\ &\leq \sum_{1}^{n}\frac{\epsilon}{2n}\\ &= \frac{\epsilon}{2}
\end{align*}
It follows that
$$\int_n\left|f - \sum_{1}^{n}a_j \chi_{F_j}\right| \leq \int |f- \phi| + \int\left|\phi - \sum_{1}^{n}a_j \chi_{F_j}\right|< \epsilon/2 + \epsilon/2 =\epsilon $$
Since, for each $j$, $F_j$ is a finite union of rectangles whose sides are intervals, we have that $$\sum_{1}^{n}a_j \chi_{F_j}=\sum_{1}^{m}c_k \chi_{R_k}$$ where, for each $k$, $R_k$ is a rectangle whose sides are intervals, $c_k \neq 0$. Take $\psi= \sum_{1}^{m}c_k \chi_{R_k}$.
So we proved that, for any $\epsilon >0$, there is a simple function $\psi = \sum_{1}^{m}c_k \chi_{R_k}$ such that, for each $k$, $R_k$ is a rectangle whose sides are intervals, $c_k \neq 0$ and
$$ \int |\psi - f|<\epsilon$$
For the last part, note that, from what we have just proved we can find a simple function $\psi = \sum_{1}^{m}c_k \chi_{R_k}$ such that, for each $k$, $R_k$ is a rectangle whose sides are intervals, $c_k \neq 0$ and
$$ \int |\psi - f|<\epsilon/2$$
For each $k$, let $\varphi_k $ be a continuous function such that
$$\left | \chi_{R_k}- \varphi_k \right |< \epsilon/(2|c_k|m)$$
(such function always exist)
Let $ g= \sum_{k=1}^m c_k \varphi_k $.
It is clear that $g$ is continuous and we have
\begin{align*}
\int |f-g| &\leq \int |f-\psi|+\int |\psi - g| < \\
& < \epsilon/2 + \int \left | \sum_{1}^{m}c_k \chi_{R_k}-\sum_{k=1}^m c_k \varphi_k \right | \leq \\
& \leq \epsilon/2 + \sum_{1}^{m}|c_k| \int \left | \chi_{R_k}- \varphi_k \right | \leq \\
& \leq \epsilon/2 + \sum_{1}^{m}|c_k| (\epsilon/(2|c_k|m))= \\
& = \epsilon/2 + \epsilon/2 = \epsilon
\end{align*}
Remark: Note that this proves is "parallel" to the proof of Theorem 2.26. See, for instance, my answer in
Real Analysis, Folland Theorem 2.26 Integration of Complex Functions |
Finding one sided limit | Suppose that $t\in\mathbb R$.
The function $f(t)=\sqrt{4-t^2}$ is defined if and only if $4-t^2\ge0\iff -2\le t\le2.$
By the way, $t\to 2^+$ means $t\gt 2$ during the approach. Hence, $f(t)$ is not defined during the approach. |
Finding Orthonormal Eigenvectors of a $4x4$ Variance-Covariance Matrix | For $\lambda=0$, we could also let $x_3=-1$ and $x_4=1$. Hence $x_2=2$ and $x_1=0$ so we get another eigenvector of $\frac{1}{\sqrt{6}}(0,2,-1,1)$
For $\lambda=\frac{1}{4}$, we could also let $x_3=1$ and $x_4=-1$. Hence $x_2=1$ and $x_1=0$ so we get another eigenvector of $\frac{1}{\sqrt{3}}(0,1,1,-1)$ |
Let $A$ be a principal ideal domain, and $a,b,d$ elements of $A$. Prove that $d$ is a gcd of $a$ and $b$ if and only if $aA+bA=dA$. | Use the hypothesis that $A$ is a PID:
Consider the ideal generated by $a$ and $b$, i.e. $aA+bA$. Now, since $A$ is a PID, this ideal is generated by one element, say $\delta$: yielding $aA+bA=\delta A$.
But now, by your first step, $\delta$ is a $\gcd$ of $a$ and $b$, hence it is associate to $d$ (i.e. $d=\delta u$ with a unit $u$), so $dA=\delta A = aA+bA$. -QED- |
oddness and evenness of sum of three square integers | A square is always congruent to $1$ or $0$ $\pmod{4}$. Hence if you assume that $x,y$ are odd they'll be congruent to $1 \pmod{4}$. Thus, if $z$ is odd we have $x^2+y^2+z^2 \equiv v^2 \equiv 2 \pmod{4}$ which is absurd.
One possible way of doing the problem is the following one :
Firstly it is easy to prove that if $x,y,z$ are even so is $v$. Now lets prove that if $v$ is even so are $x,y,z$.
Suppose that among $x,y,z$ two of them have different parities. As we already showed that the case of two odds and one even doesn't work, let's consider the case where we have two even numbers and one odd. The sum of the squares of the two even numbers is congruent to $0 \pmod{4}$. Adding the square of the one odd number, we find that $v^2 \equiv 1\pmod{4}$, which is absurd as we supposed $v$ even. (If $x,y,z$ were all odd we would get $v^2 \equiv 3 \pmod 4$, which is impossible for any square.)
Hope this'll help you.
Hubert |
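The residue fact driving the argument, plus a brute-force cross-check of the parity claim, for small solutions:

```python
import math

# Squares are 0 or 1 mod 4, so x^2+y^2+z^2 = v^2 rules out sums = 2, 3 (mod 4).
assert {n * n % 4 for n in range(1000)} == {0, 1}

# In every small solution of x^2+y^2+z^2 = v^2, v is even exactly when
# x, y, z are all even.
for x in range(1, 25):
    for y in range(1, 25):
        for z in range(1, 25):
            s = x * x + y * y + z * z
            v = math.isqrt(s)
            if v * v == s:
                all_even = x % 2 == 0 and y % 2 == 0 and z % 2 == 0
                assert (v % 2 == 0) == all_even
```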
Some questions about the proof of Godel's first incompleteness theorem | The key point about Goedel's theorem isn't that there is some sentence which PA (say) can't prove, it's that there's some sentence in the language of PA which PA can't prove.
This is what Goedel numbering helps us do. The point is that we want to write "This sentence is unprovable," in the language of arithmetic. But arithmetic doesn't let us talk directly about sentences and proofs! So we need to somehow argue that there is a statement about arithmetic - something like $$\mbox{"$\exists x\forall y\exists z\forall w \exists u(3x+4y^2-7z+w^4-u=0)$"}$$ - which somehow "means" "I am unprovable".
This addresses your first question - why arithmetization is necessary. Your second question, how it's done, is harder to answer, and I suggest you look at a book that treats it in detail (say, Goedel without (too many) tears). Ultimately, though, the proof of Goedel's theorem hinges on two non-obvious properties:
Arithmetization works - this is ultimately a statement that the theory PA is strong enough.
PA is recursively axiomatizable - there is an "explicit" description of what the axioms of PA are. This is ultimately a statement that PA is not too complicated.
Both pieces are needed for Goedel's argument to work:
Overly simple theories, like the theory of dense linear orders without endpoints, can be complete in that they prove or disprove every sentence in their language. Such theories must be too simple to "describe arithmetic" in some sense.
Overly complicated theories can also be complete. For instance, let $T$ be the true theory of arithmetic - that is, the set of all true statements about the structure $(\mathbb{N}; +, \times, 0, 1, <)$. $T$ is complete - every sentence in its language is either true (in which case $T$ proves it) or false (in which case $T$ disproves it); this doesn't contradict Goedel since $T$ is impossible to "nicely describe".
So one consequence of Goedel's theorem is that PA, and theories like it, occupy a logical "sweet spot" of sorts. This isn't the main takeaway of course, but it's an interesting observation.
It's also worth noting that there are theories in other languages than that of arithmetic, to which Goedel's theorem also applies; for instance, the usual axioms of set theory are both "simple" enough and "strong" enough for the proof to go through. Basically, you don't really need arithmetization on the nose, you just need the notions of sentence, proof, etc. to be representable in your language, and for basic properties of them to be provable in your theory. |
Tensor product of vector spaces by the "noncommutative definition" | More generally, if $R$, $S$ and $T$ are rings and we have bimodules ${}_RM_S$ and ${}_SN_T$, then the tensor product
$$
M\otimes_S N
$$
is in a natural way an $R$-$T$-bimodule ${}_R(M\otimes_S N)_T$.
When $S$ is a commutative ring, a module $M_S$ can be considered as an $S$-$S$-bimodule with $axb=x(ab)=(ba)x$. With this convention, a balanced map is also a bilinear map, if you think of it as corestricted to its image, which is in a natural way a module over the ring.
So $V\otimes_kV$ is the same thing for both approaches. |
Find the Möbius transformation mapping $(i, 0, \infty)$ to $(0, \infty, -i)$, in precisely that order | You should have $z-i$ on the numerator in order to send $i$ to $0$. Then multiply by $-i$ in order to send $\infty$ to $-i$. Then determine the constant on the denominator so that $0$ goes to infinity. You get
$$w=\frac{ -i (z-i) } {z}$$ |
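A numeric check of the map $w(z)=-i(z-i)/z$ on the three prescribed points, with $\infty$ handled via very large and very small $|z|$:

```python
def w(z):
    return -1j * (z - 1j) / z

assert abs(w(1j)) < 1e-12            # i -> 0
assert abs(w(1e12) - (-1j)) < 1e-9   # z near infinity -> -i
assert abs(w(1e-12)) > 1e9           # z near 0 -> infinity
```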
Given a right triangle whose side lengths are all integer multiples of 8, how many units are in the smallest possible perimeter of such a triangle? | We have
$$(8a)^2+(8b)^2=(8c)^2$$
Dividing by 64:
$$a^2+b^2=c^2$$
As is well-known, the smallest possible integer values for these variables are $a=3,b=4,c=5$, so the smallest possible perimeter is
$$8(a+b+c)=8\cdot12=96$$ |
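A brute-force confirmation that no smaller perimeter exists among right triangles whose sides are all multiples of 8:

```python
import math

best = None
for a in range(8, 201, 8):
    for b in range(a, 201, 8):
        c = math.isqrt(a * a + b * b)
        if c * c == a * a + b * b and c % 8 == 0:
            best = a + b + c if best is None else min(best, a + b + c)

# the minimum comes from the (24, 32, 40) triangle, i.e. 8*(3, 4, 5)
assert best == 96
```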
Double Integrals: How to choose appropriate limits of integration? | Notice that for the region of integration, $0\le y\le 1$ and $e^y\le x\le 1+e-y$, so you can find
$\;\;\;\displaystyle\int_0^1\int_{e^y}^{1+e-y}1\; dx\;dy$. |
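With those limits, the inner integral has width $(1+e-y)-e^y$, and the iterated integral evaluates to $3/2$; a midpoint-rule check (not in the original answer):

```python
import math

# midpoint rule in y for the area integral over 0 <= y <= 1
n = 2000
h = 1.0 / n
area = 0.0
for i in range(n):
    y = (i + 0.5) * h
    width = (1 + math.e - y) - math.exp(y)   # inner integral of 1 dx
    area += width * h

assert abs(area - 1.5) < 1e-5
```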
Normal Vector Bundle of RPn | If $M$ is a submanifold of $\mathbb{R}^n$, the normal bundle of $M$ is the vector bundle $NM$ such that for every $x\in M$, $NM_x$ the fibre of $NM$ at $x$ is the subspace of $\mathbb{R}^n$ othogonal to $T_xM$.
If $M=S^n\subset\mathbb{R}^{n+1}$, then $NS^n_x=\{(x,tx):t\in\mathbb{R}\}$. You cannot speak of the normal bundle of $\mathbb{R}P^n$ without first embedding it somewhere, but $NS^n$ behaves well with respect to $(x,tx)\mapsto (-x,-tx)$ as Hatcher mentioned and induces a bundle on $\mathbb{R}P^n$ which is trivial. Basically, you can lift the action of $\mathbb{Z}/2$ on $S^n$, and the quotient $NP^n$ is a trivial bundle. Another way to see this is to consider the map $q:\mathbb{R}P^n\times \mathbb{R}\rightarrow NP^n$ defined by $q([x],t)=[(x,tx)]$, where $p:S^n\rightarrow \mathbb{R}P^n$ is the canonical projection and $p(x)=[x]$. Since $(x,tx)$ and $(-x,t(-x))$ are identified in $NP^n$, the map $q$ is well defined. |
Finding common distance to three points on circles and one input range (cont.) | Formulation
I'd use the tangent half-angle formula to express $A$ in terms of a variable $a$ as
$$A=\left(c+L\frac{1-a^2}{1+a^2},d+L\frac{2a}{1+a^2}\right)$$
Do the same for the points on the small circles, $B$ and $-B$ described by parameter $u$:
$$B=\left(e+R\frac{1-u^2}{1+u^2},f+R\frac{2u}{1+u^2}\right)$$
Also you need the point under $A$ where the three segments of length $L$ meet:
$$P=\left(A_x,A_y-R-N\right)=\left(c+L\frac{1-a^2}{1+a^2},d+L\frac{2a}{1+a^2}-R-N\right)$$
Then $N$ is characterized by
$$\lVert P-B\rVert^2=\lVert P+B\rVert^2=N^2$$
These are rational equations, i.e. equations using rational functions due to the division from my tangent half-angle formulation. You could make them polynomial equations by multiplying with the common denominator. In my Sage code below, I express the equation as difference being zero, and do that transformation from rational to polynomial by taking the numerator of the difference.
Using a resultant of those polynomial equations, you can eliminate $u$ from them. The remaining equation will have a solution of multiplicity greater than one for those parameters $a$ where solutions appear or disappear. The discriminant will be zero whenever there exists a multiple solution of $N$. So we can say that whenever the number of solutions changes, the discriminant will be zero. The converse is not necessarily true.
You could find the roots of that discriminant, taken as a polynomial in $a$, to find the potentially relevant positions for $A$. Then you would compute the number of solutions between these positions in order to find the ranges where that count is different from zero.
Computations
It is fairly easy to express the computation in Sage like this:
PR.<a,c,d,e,f,L,R,u,N> = QQ[]
P = vector([c+L*(1-a^2)/(1+a^2), d+L*(2*a)/(1+a^2)-R-N])
B = vector([e+R*(1-u^2)/(1+u^2), f+R*(2*u)/(1+u^2)])
eq1 = ((P-B)*(P-B)-N^2).numerator().resultant(((P+B)*(P+B)-N^2).numerator(), u)
eq1.factor() # Look at factors to get rid of some noise.
assert eq1.mod(64*R^2*(a^2+1)) == 0
eq2 = eq1 // (64*R^2*(a^2+1)) # Remove spurious factor.
eq3 = eq2.discriminant(N) # SLOW!!!
eq4 = [i.polynomial(a) for i, _ in eq3.factor()]
However the discriminant computation will take excessive amounts of time and memory. It's taking so long that I decided to send this answer before it had finished for me. Later on I used a workaround to presumably make it a bit faster, or maybe I just waited a bit longer. The resulting polynomial could be split into four distinct factors. Three of them had exponent $2$ and a degree in $a$ of at most $4$, but the most interesting factor had exponent $1$ and degree $20$ in $a$.
Things would be a lot easier if you had concrete numbers, instead of doing all of this symbolically. So my recommendation would be getting a computer algebra system where you can enter all your known values as actual numbers, and then do the computation with only the bare minimum of actually unknown values as variables of the polynomials.
Example
To take a concrete example (based on a construction you shared before):
\begin{align*}
c &= -15 & d &= 14 & L &= 30 \\
e &= 14 & f &= 15 & R &= 12
\end{align*}
For this setup the discriminant had 14 zeros. But only 6 of them actually corresponded to a change in the number of solutions, and those 6 all came from the degree 20 factor of the discriminant:
\begin{align*}
a_1 &= -8.5615180578666 &
a_2 &= -1.1646835723394 \\
a_3 &= -0.8299517239535 &
a_4 &= -0.3442609311901 \\
a_5 &= +0.0167654732809 &
a_6 &= +1.3274725315537
\end{align*}
Each pair denoted a range $[a_{2k-1},a_{2k}]$ where there were real solutions of the original equation.
When I find the time, I'll create some illustrations of these.
Corner case
Note that the tangent half-angle formula is unable to represent a single point on the circle, namely for angle $\pi$. That value would correspond to $\lim_{a\to\infty}$. This would show as the leading coefficient of the discriminant being zero. What I mean is that you can work out what the degree of $a$ in the discriminant should be in a number of ways. The easiest is probably a computations using concrete input numbers in general positions. If for a specific input configuration the degree is less than that, you have a zero leading coefficient and know that $a\to\infty$ i.e. $A=(c-L,d)$ is a solution of the discriminant problem as well, and the number of solutions may change at that point. If you want to avoid all this trouble, just always consider that as a point where the number of solutions might change, since you need to check whether you have zero solutions in a given range anyway, so adding one more range shouldn't be a problem. |
Tower of dice - Abstracting a practical problem to a mathematical method | Your hypothesis is correct. A die has a total of $21$ eyes, $7$ on each pair of opposite sides, so it always has $14$ on the walls no matter how you place it. Since $14=3\cdot4+2$, the number of eyes on the walls is not an integer multiple of $4$ if you have an odd number of dice, so you can't have the same number of eyes on all $4$ walls in this case. |
Evaluating a line integral of a vector field numerically | Your work looks pretty good, but you can actually take it a step further since you know $r_i'(t)$. When you substitute in this information, each integral depends only on one component of $\vec V$, but not both. For instance
$$\int^{b_1}_{a_1} \vec V(\vec r_1(t)) \cdot r_1'(t) \; dt
=\int^{b_1}_{a_1} u(\vec r_1(t)) \; dt$$
The next task is to write a routine to implement the function $\vec V$, that is, you want a function from $\mathbb{R}^2\rightarrow\mathbb{R}^2$. Use something like interp2 (though this may not be the most efficient). Once you have this accomplished, simply evaluate the now 1-d integrals using the quad function, for instance. |
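As a sketch of such a routine in Python (SciPy's `RegularGridInterpolator` and `quad` standing in for MATLAB's `interp2` and `quad`; the field $u(x,y)=x+y$ and the segment are made up purely for illustration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import RegularGridInterpolator

# Hypothetical data: u(x, y) = x + y sampled on a grid
xs = np.linspace(0.0, 2.0, 41)
ys = np.linspace(0.0, 2.0, 41)
X, Y = np.meshgrid(xs, ys, indexing="ij")
u_interp = RegularGridInterpolator((xs, ys), X + Y)

# Segment r1(t) = (t, 0) for 0 <= t <= 1, so r1'(t) = (1, 0) and the
# integrand V(r1(t)) . r1'(t) reduces to u(r1(t))
integrand = lambda t: u_interp([[t, 0.0]])[0]
val, _ = quad(integrand, 0.0, 1.0)
print(val)  # exact value is 1/2, since u(t, 0) = t
```

The other segments work the same way, each using only the relevant component of $\vec V$.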
Find solutions of the given equation in the form of power series | Assume $y = \displaystyle \sum_{j = 0}^{\infty} a_j x^{s+j}$ with $a_0 \ne 0$.
We find $$\sum_{j = 0}^{\infty} a_j (s+j)(s+j-1)x^{s+j-2} + 2 \sum_{j = 0}^{\infty} a_j(s+j) x^{s+j} = 0$$
Substituting for lowest power of $x$ at $j=0$, $$a_0 s (s-1) = 0 \implies s= 0 \text{ or } s = 1$$
For $j=1$, the coefficient of $x^{s-1}$ gives $$a_1 (s+1)s = 0,$$ which forces $a_1 = 0$ when $s=1$; for $s=0$ we may also take $a_1 = 0$, since that term is already produced by the $s=1$ solution.
For the recurrence relation, collect the coefficients of $x^{s+j}$:
$$a_{j+2}(s+j+2)(s+j+1) + 2a_{j}(s+j)=0$$
$$a_{j+2} = \frac{-2a_{j}(s+j)}{(s+j+2)(s+j+1)}$$
Since $a_1 = 0$, all the odd coefficients are zero. You will obtain two solutions; one for $s = 0$, other for $s = 1$.
Can you proceed from here? |
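As a cross-check of the recurrence (my addition, not part of the answer): reading the series relation off as the equation $y''+2xy'=0$, the $s=1$ branch should reproduce the series of $\int_0^x e^{-t^2}\,dt$, whose $x^{2m+1}$ coefficient is $(-1)^m/(m!(2m+1))$:

```python
from fractions import Fraction
from math import factorial

# s = 1 branch: a_{j+2} = -2 a_j (s+j) / ((s+j+2)(s+j+1)), a_0 = 1, a_1 = 0
s = 1
a = {0: Fraction(1), 1: Fraction(0)}
for j in range(10):
    a[j + 2] = Fraction(-2 * (s + j), (s + j + 2) * (s + j + 1)) * a[j]

# compare against the known series of the error-function integral
for m in range(6):
    assert a[2 * m] == Fraction((-1) ** m, factorial(m) * (2 * m + 1))
print("recurrence matches the closed-form coefficients")
```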
How to prove that polynomial variables are transcendental? | Maybe the following paragraph will clear things up.
Let $R$ be a ring. Consider the set $R[x]$ of $\mathbb{N}$-indexed sequences of elements of $R$ with finite support, i.e. if $(a_n) \in R[x]$ then $a_n = 0$ for $n$ big enough. Endow this set with pointwise addition, and the multiplication of $(a_n)$ and $(b_n)$ is given by $(c_n)$ where $c_n = \sum_{i+j=n} a_i b_j$. Then this is a ring, and $R$ embeds in $R[x]$ by sending $a \in R$ to $(a, 0, 0, \dots)$. It is now rather elementary to show that the element $x = (0,1,0,0,\dots) \in R[x]$ is transcendental over $R$.
tl;dr: $R[x]$ is not defined as being "generated by a transcendental element"... You can just define $R[x]$ in the usual fashion, and then you can prove that $x \in R[x]$ is transcendental. Not the other way around.
What the sentence you quoted actually means is the following: given any $R$-algebra $A$ and any element $a \in A$ transcendental over $R$, there exists a unique morphism of $R$-algebras $R[x] \to A$ such that $x \mapsto a$, and moreover this morphism is an embedding. So in a sense you take $R$, "add a transcendental element", and you get $R[x]$. But it's just a heuristic.
In other words, $R[x]$ is initial in the category of pointed $R$-algebras. This seems to be what you had in mind when you wrote your definition of "polynomial rings over $R$". But as usual, when you define something as an initial object in some category, you have to show that it exists... And there's no better way to show that something exists than to construct it. Defining something as the initial object of a category is not a definition until you can prove that the initial object exists. |
Alternative definition of derivative | The notion already exists and goes (usually) by the name of derivative in the sense of Whitney.
It is often applied for example on Cantor sets when simply we don't have sufficient data to define the usual derivative, but still we can discuss how a function is locally Hölder continuous or perhaps a bit more (that's where the derivative enters).
On the other hand, there is an important result of Whitney showing (more or less) that after all we get the same. For details see for example: Whitney extension theorem. |
How to simplify $42^{25\sqrt{25^{25}}}$? | Let us first simply this a bit:
$$42^{25 \sqrt{25^{25}}} = 42^{25 \cdot 5^{25}} = (21^{25})^{5^{25}} 2^{5^{27}}.$$
The number of digits is obtained by the logarithm, so we calculate
$$\log_{10} 42^{25 \sqrt{25^{25}}} = 5^{25} \log_{10} 21^{25} + 5^{27} \log_{10} 2.$$
Since $n = \lfloor \log_{10} 21^{25} \rfloor + 1$, I'd say that the answer is "none of the above".
The used identities can be found here. The check with WolframAlpha was not as simple, but here it is: the number of digits of $21^{25}$ is $34$, and the number of digits of $42^{1000}$ is $1624$. I think we can agree that $25 \sqrt{25^{25}} > 1000$, and $1624 > 850 = 25 \cdot 34 = 25n$ (the other offered answers are even smaller), which I believe confirms my answer. |
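Both digit counts are easy to reproduce exactly with integer arithmetic, e.g.:

```python
# digit counts used in the check above (exact, no floating point)
d21 = len(str(21**25))
d42 = len(str(42**1000))
print(d21, d42)  # 34 1624
```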
Probability of finding the answer with multiple cases | At the first step, let to compute the probability of publishing a book.
$$p= P(\text{publish a book}) = P(Good)\times P(Publish|Good) + P(Bad)\times P(Publish|Bad) = \frac{1}{2}\times \frac{2}{3} + \frac{1}{2}\times \frac{1}{4} = \frac{11}{24}$$
We know publishing a book is independent of publishing another book. Thus, we have
$$P(\text{publish at least one book}) = 1- P(\text{does not publish any book}) = 1- (1-p)^{2} = \frac{407}{576}$$ |
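A quick exact check with rational arithmetic:

```python
from fractions import Fraction

p = Fraction(1, 2) * Fraction(2, 3) + Fraction(1, 2) * Fraction(1, 4)  # 11/24
answer = 1 - (1 - p) ** 2
print(answer)  # 407/576
```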
Is this integral solvable? (Physics) | You probably mean motion in a gravity field,
$$
\frac{d^2x}{dt^2}=-\frac{GM}{|x|^2}\hat x=-\frac{GMx}{|x|^3}.
$$
There are two ways to integrate once: Multiply with $2\dot x$ and get
$$
|\dot x|^2=\frac{2GM}{|x|}+C
$$
and the cross product with $x$ to get
$$
\frac{d}{dt}(\dot x\times x)=\ddot x\times x=0\implies \dot x\times x = D.
$$
These lead to the Kepler laws of planetary motion. |
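Both first integrals can be watched numerically. A small sketch (illustrative units $GM=1$, an arbitrary bound initial condition, and a hand-rolled RK4 step):

```python
import numpy as np

GM = 1.0  # illustrative units

def accel(x):
    return -GM * x / np.linalg.norm(x) ** 3

def rk4_step(x, v, dt):
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

x, v = np.array([1.0, 0.0]), np.array([0.0, 1.2])  # a bound orbit
energy0 = v @ v - 2 * GM / np.linalg.norm(x)       # |xdot|^2 - 2GM/|x|
angmom0 = v[0] * x[1] - v[1] * x[0]                # xdot x x in the plane
for _ in range(20000):
    x, v = rk4_step(x, v, 5e-4)

energy = v @ v - 2 * GM / np.linalg.norm(x)
angmom = v[0] * x[1] - v[1] * x[0]
assert abs(energy - energy0) < 1e-8
assert abs(angmom - angmom0) < 1e-8
print("both first integrals are conserved along the orbit")
```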
Eigenvalues and Eigenfunctions of an Operator | This is a finite dimensional question, so you may write the operator in matrix form (using the basis $v_1,v_2$) as
$$
\begin{bmatrix}a&c\\b&d \end{bmatrix}
$$
then solve for the roots of characteristic polynomial to find eigenvalues $\lambda$ (there may be none, one, or two depending on what field you are over!) solving
$$
\lambda^2-(a+d)\lambda+\det(T)=0
$$
If you get an eigenvalue $\lambda$, you find eigenvectors by examining
$$
\ker\left(\begin{bmatrix}a-\lambda&c\\b&d-\lambda \end{bmatrix}\right)
$$ |
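A numeric illustration (sample entries; NumPy's `eigvals`, then checking each eigenvalue against the characteristic polynomial):

```python
import numpy as np

a, b, c, d = 2.0, 1.0, 1.0, 2.0  # arbitrary sample entries
T = np.array([[a, c], [b, d]])

lams = np.linalg.eigvals(T)
# each eigenvalue is a root of lambda^2 - (a+d) lambda + det(T)
for lam in lams:
    assert abs(lam**2 - (a + d) * lam + np.linalg.det(T)) < 1e-10
print(np.sort(lams.real))
```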
Corresponding matrix for a symmetric non-degenerate bilinearform. | First of all convince yourself that base change of a symmetric bilinear form corresponds to replacing the representing matrix $S$ by the matrix $P^TSP$, where $P \in GL_{d}(k)$ is the base change matrix between the two bases. The matrices $S$ and $P^TSP$ are called congruent to each other. Check that congruence defines an equivalence relation.
Over any field in which $2 \neq 0$, any symmetric matrix is congruent to a diagonal matrix. See Will Jagy's exposition here, with many examples and links with more examples.
For example, check that your matrix $\pmatrix{0&I_n\\I_n&0}$ is congruent to $\pmatrix{2I_n&0\\0&-\frac12 I_n}$ via $P = \pmatrix{I_n&-\frac12I_n\\I_n&\frac12I_n}$.
Now a diagonal matrix $\mathrm{diag}(d_1, ..., d_r)$ is congruent to $\mathrm{diag}(c_1^2d_1, ..., c_r^2d_r)$ via $P:=\mathrm{diag}(c_1, ..., c_r)$. In other words, we can change each of the diagonal entries by an appropriate square.
For example, over the field $\mathbb R$, the matrix $\pmatrix{2I_n&0\\0&-\frac12 I_n}$ is further congruent to $\pmatrix{I_n&0\\0& \color{red}{-} I_n}$ (and hence so is the original matrix $\pmatrix{0&I_n\\I_n&0}$: in the language of Sylvester's theorem, the form we are looking at here when considered over a real vector space has signature $(n,n)$).
But of course over an algebraically closed field $k$ (or any field in which all elements are squares), we can just make all diagonal entries $=1$ or $=0$. And zeros occur if and only if the form is degenerate. In particular, up to base change there is only one non-degenerate symmetric bilinear form over an algebraically closed field $k$ of characteristic $\neq 2$. In particular, the one you have there is it, since it is congruent to $\pmatrix{I_n&0\\0&I_n}$ (note it is congruent to the identity matrix over all fields in which $2$ and $-\frac12$ are squares).
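For the record, the congruence claimed above is a one-line numerical check (shown here for $n=2$):

```python
import numpy as np

n = 2
I, Z = np.eye(n), np.zeros((n, n))
S = np.block([[Z, I], [I, Z]])
P = np.block([[I, -0.5 * I], [I, 0.5 * I]])

# P^T S P = diag(2 I_n, -1/2 I_n), as claimed
assert np.allclose(P.T @ S @ P, np.block([[2 * I, Z], [Z, -0.5 * I]]))
print("congruent as claimed")
```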
Are two proofs better than one? | Sometimes that happens. For instance, it may be the the case that one proof is a smart but hard to find proof, whereas the other one, although longer and not particulary bright, is the natural approach to the problem. |
Does probabilistic graphical models include kalman filters? | indeed they do and it is quite straightforward to think about them in terms of graphical models (at least for some of us) if you don't like to remember specific formula for every model. You create nodes for velocity and position (represented as Gaussians with mean and std) and attach them to your observed node and repeat that for all the timesteps (with the difference that first node for position and vleocity is independent of the history and the future one will depend, which makes sense since we assume tracking object is not performing any jumps). You can find (simplified) representation of Kalman filters as graphical models in Chritopher Bishop's 3rd lecture from MLSS 2013: https://www.youtube.com/watch?v=QJSEQeH40hM. |
3 balls are distributed to 3 boxes at random. Number of way in which we set at most 1 box empty is: | Assuming the balls and boxes are distinguishable, you should have multiplied ${3 \brace 2}$ by $3!$ rather than $2!$ in the case where one box is left empty, where ${n \brace k} = S(n, k)$.
Let's look at the case where exactly one box is left empty. Two balls must be placed in one box, and the other ball must be placed in another. There are $\binom{3}{2}$ ways to choose which two balls are placed together in one box. If the boxes are indistinguishable, we place these two balls in one box, and place the other ball in another box. Thus, if the boxes were indistinguishable, the number of ways we can distribute three distinct balls to three indistinguishable boxes so that one box is left empty is
$${3 \brace 2} = \binom{3}{2} = 3$$
If the boxes are actually distinguishable, it matters which box receives two balls, which box receives one ball, and which box receives no balls. There are $3!$ such assignments. Thus, the number of ways to distribute three distinct balls to three distinct boxes so that exactly one box is left empty is
$${3 \brace 2}3! = 18$$
Since there are $3!$ ways to distribute three distinct balls to three distinct boxes so that no box is left empty, the number of ways to distribute three distinct balls to three distinct boxes so that at most one box is left empty is
$$3! + {3 \brace 2}3! = 24$$
An alternate approach
Assume the boxes are distinguishable from the outset.
No box is left empty: There are $3! = 6$ ways to assign each of the three distinct balls to a different box.
Exactly one box is left empty: If exactly one box is empty, there are three ways to decide which box will receive two balls and two ways to assign a second box to receive the remaining ball. There are $\binom{3}{2}$ ways to decide which two balls are placed in the box which will receive two balls and one way to place the remaining ball in the box which will receive one ball. Hence, there are
$$3 \cdot 2 \cdot \binom{3}{2} = 3!\binom{3}{2} = 18$$
ways to distribute three distinct balls to three distinct boxes so that exactly one box is left empty.
Thus, there are indeed
$$3! + 3!\binom{3}{2} = 6 + 18 = 24$$
ways to distribute three distinct balls to three distinct boxes so that at most one box is left empty. |
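A brute-force confirmation over all $3^3 = 27$ assignments:

```python
from itertools import product

# assign each of 3 distinct balls to one of 3 distinct boxes
counts = {0: 0, 1: 0}
at_most_one_empty = 0
for assignment in product(range(3), repeat=3):
    empty = 3 - len(set(assignment))  # boxes receiving no ball
    if empty <= 1:
        at_most_one_empty += 1
        counts[empty] += 1
print(counts, at_most_one_empty)  # {0: 6, 1: 18} 24
```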
Numbers defined with matrices | This is to some extent a question of representation theory. So we do use these things.
For example, suppose you want to extend the field of rational numbers to include some weird number $\xi$ which satisfies $\xi^2 - N = 0$. You can represent multiplication by a number $a+b\xi$ with the 2x2 matrix $\pmatrix{a & Nb \\ b & a}$. Then yours is just a special case of $N = -1$, which (if the underlying elements $a$ and $b$ are in $\mathbb{R}$) is one way to represent complex numbers. If $N = 0$ then we have the dual numbers, and if $N=1$ then we have the split-complex numbers.
Note that extensions like this can cause problems and we may lose field properties, for instance there is no way to divide a real number by a pure dual number since eliminating the dual from the denominator constitutes division by zero. So always check that the basic rules of arithmetic are preserved, or if we need additional constraints. Just because we lose field properties doesn't mean the algebraic structure isn't interesting or useful.
This kind of extension can continue. If your underlying field is $\mathbb{Q}$ and you extend it to include $\xi_N$ (as above) and then want to extend it again to include another $\xi_M$ then you have a 4x4 matrix representation. |
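A quick check that these matrices really multiply like the numbers $a+b\xi$ (shown here with $N=-1$, i.e. complex numbers; try $N=0$ for dual numbers or $N=1$ for split-complex):

```python
import numpy as np

def rep(a, b, N):
    """Matrix representing multiplication by a + b*xi, where xi^2 = N."""
    return np.array([[a, N * b], [b, a]])

N = -1
a1, b1, a2, b2 = 2, 3, -1, 4
prod = rep(a1, b1, N) @ rep(a2, b2, N)
# (a1 + b1*xi)(a2 + b2*xi) = (a1*a2 + N*b1*b2) + (a1*b2 + a2*b1)*xi
assert np.array_equal(prod, rep(a1 * a2 + N * b1 * b2, a1 * b2 + a2 * b1, N))
print(prod)
```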
NP-completeness and NP problems | A problem $X$ is "NP-complete" if for any problem $Y$ in NP, there is a polynomial-time reduction from $Y$ to $X$. So if there is a polynomial-time algorithm for some NP-complete decision problem $X$, then there is a related algorithm for any problem $Y$ in NP, namely, reduce the instance of $Y$ to an instance of $X$ and use the polynomial-time algorithm for $X$. |
A combinatorial proof of the identity $\sum\limits_{k=0}^nk^2 {n \choose k}^2 = n^2 {2n - 2 \choose n- 1}$ | The left hand side counts, for each $k$, the number of ways you can choose $k$ boys and $n-k$ girls to form a team. Among these $k$ boys there are $k$ ways to choose the boy leader. There are $k$ ways to choose a girl supervisor from the $k$ girls that are not on the team. |
Finding Galois group $Gal(x^4 + 4 / Q)$ | Well with quartics of certain forms you can complete the square rather easily to find a factorisation into quadratics - sometimes in an extension field, but here over $\mathbb Q$:
$x^4+4=x^4+4x^2+4-4x^2=(x^2+2)^2-4x^2=(x^2+2x+2)(x^2-2x+2)$
For an example which comes up regularly $x^4+1=(x^2+1)^2-2x^2=(x^2+\sqrt 2 x+1)(x^2-\sqrt 2 x +1)$. And you can do similar things in other cases when all the powers are even.
In your case the polynomial you are given can be factored over $\mathbb Q$. The negatives of the roots of the first factor are the roots of the second factor. The first factor splits in a quadratic extension, and the second factor splits too.
Note that if you have a polynomial $p(x^2)$ you know, in fact, that if $\alpha$ is a root, then so is $-\alpha$, so the roots pair up, and you get various possibilities of factorising $p(x^2)=q(x)q(-x)$ corresponding to different ways of splitting the roots - allocating $\alpha$ to one factor and $-\alpha$ to the other. That means that you only have to split $q(x)$ to split $p(x^2)$. If there are $n$ distinct pairs of roots we can fix $\alpha$ and find that there are $2^{n-1}$ ways of choosing the signs for the others. In the case in the question there are four roots in two pairs, and two factorisations of the kind under consideration.
As an illustration we have $x^4+4=(x^2+2ix-2)(x^2-2ix-2)$ from the alternative pairing of roots in the case in your question. This clearly belongs in $\mathbb Q(i)$ and solving the quadratics shows that they split in $\mathbb Q(i)$.
We always expect three ways of splitting a quartic into quadratics. The third way here is to pair each root with its negative obtaining $x^4+4=(x^2+2i)(x^2-2i)$ |
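A quick coefficient check of two of these factorisations (plain Python; coefficients are listed from the constant term up):

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x^2 + 2x + 2)(x^2 - 2x + 2) = x^4 + 4, over Q
assert polymul([2, 2, 1], [2, -2, 1]) == [4, 0, 0, 0, 1]
# (x^2 + 2ix - 2)(x^2 - 2ix - 2) = x^4 + 4, over Q(i)
assert polymul([-2, 2j, 1], [-2, -2j, 1]) == [4, 0, 0, 0, 1]
print("both factorisations of x^4 + 4 check out")
```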
A Question on Integers and Fractions | Suppose the equation $\left(\frac ab\right)^2+\left(\frac cd\right)^2 = e^2$ holds, where the fractions are in their simplest form, i.e. $\gcd(a,b) = \gcd(c,d) = 1$. Then $$\frac {a^2}{b^2} = \frac {e^2d^2 - c^2}{d^2}$$
Both sides are in their simplest form since
$$\gcd(a^2,b^2) = 1 = \gcd(c^2,d^2) = \gcd(e^2d^2 - c^2, d^2)$$
so $a^2 = e^2d^2-c^2, b^2 = d^2$.
Given $b,d > 0$, we get $b=d$. Therefore no solution exists, and this fact is unrelated to Pythagorean triples.
Handshake/pigeonhole principle problem? | For reference, the two complicated rules are
For every pair of people who shook hands, there is exactly $1$ person who shook hands with both of them.
For every pair of people who did not shake hands, there are exactly $6$ people who shook hands with both of them.
Let $n$ be the number of people. There are $20n/2=10n$ handshakes.
Call a triple of people $\{a,b,c\}$ a "two-chain" if exactly two pairs of them shook hands. Let's count the number of two-chains in two different ways:
Let $N$ be the number of pairs of people who did not shake hands. Per rule $(2)$, there are $6N$ two-chains.
Let $H$ be the number of handshakes. For each handshake $a\leftrightarrow b$,
by rule $(1)$, there is another person $c$ who shook both their hands. Let $A$ be the set of 18 other people $a$ shook hands with, besides $b$ and $c$. Same for $B$. Per rule $(1)$ again, no one in $A$ shook hands with $b$ (because only $c$ shook hands with both $a$ and $b$), so for every person $a'\in A$, we have $\{a,a',b\}$ is a two-chain. Similarly for $\{a,b,b'\}$ for $b'\in B$. This leads to $18+18=36$ two-chains per handshake. However, this double-counts the number of two-chains since each two-chain has two handshakes, so the number is actually $36H/2=18H$.
This shows that $6N=18H$, or $N=3H$.
But we also must have $N+H=\binom{n}2$, the total number of pairs of people. It follows that $$4H=\binom{n}2.$$ But we already had that $H=10n$. Substituting this into the above, you get $n=81$. |
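The arithmetic at the end is easy to confirm (a sanity check, using the derived relations $H=10n$ and $N=3H$):

```python
from math import comb

n = 81
H = 10 * n           # handshakes: 20 per person, each counted twice
N = comb(n, 2) - H   # pairs that did not shake hands
assert 6 * N == 18 * H      # both sides count the two-chains
assert 4 * H == comb(n, 2)
print(n, H, N)  # 81 810 2430
```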
Identifying the tangent space with $R^n$ | Yes, there is nothing deep here; the vector space $T_p M$ has dimension $n$, thus you can identify it with $\mathbb R^n$. Concretely, if you have a coordinate system $x_1, x_2, \ldots, x_n$, then you have a basis of $T_p M$ given by
$$\tag{1}
\frac{\partial}{\partial x_1}, \ldots, \frac{\partial }{\partial x_n}; $$
so you can identify the vector
$$
v_1\frac{\partial}{\partial x_1}+\ldots + v_n\frac{\partial}{\partial x_n}
$$
with the $n$-uple
$$
(v_1, v_2, \ldots, v_n)\in \mathbb R^n.$$
Of course, if you change coordinates, the identification will change accordingly.
Steffen remarks in comments that this is only an algebraic isomorphism and that it need not preserve the scalar product. This means that it need NOT be the case that
$$
\langle v, w\rangle_{T_p M}=\sum_{j=1}^n v_jw_j.$$
For this to be true, the basis of $T_p M$ should be orthonormal, and there is no guarantee that (1) is. However, as in any scalar product space, given an arbitrary basis we can always construct an orthonormal one via the Gram-Schmidt algorithm. (This orthonormal basis might not come from a coordinate system, though). |
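A small numeric illustration of that last remark, with a made-up metric $g$ on a 2-dimensional $T_pM$: running Gram-Schmidt with respect to $\langle v,w\rangle = v^Tgw$ produces a basis in which the product becomes the standard dot product.

```python
import numpy as np

# Hypothetical metric on T_p M in the coordinate basis (symmetric pos. def.)
g = np.array([[2.0, 1.0], [1.0, 3.0]])
inner = lambda v, w: v @ g @ w

# Gram-Schmidt on the coordinate basis e_1, e_2
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ortho = []
for v in basis:
    for u in ortho:
        v = v - inner(v, u) * u
    ortho.append(v / np.sqrt(inner(v, v)))

# in the new basis the inner product is the standard one
G = np.array([[inner(u, w) for w in ortho] for u in ortho])
assert np.allclose(G, np.eye(2))
print("orthonormal with respect to g")
```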
Show that $\overline{\int_a^b} f+g \le \overline{\int_a^b} f+\overline{\int_a^b}g$. | To prove by contradiction, assume that
$$\overline{\int_a^b} (f + g) > \overline{\int_a^b} f + \overline{\int_a^b} g.$$
Then there exists a partition $P$ such that
$$\overline{\int_a^b} (f+g) - \overline{\int_a^b} g > U(P,f) \geqslant \overline{\int_a^b} f.$$
Since,
$$\overline{\int_a^b} (f+g) - U(P,f) > \overline{\int_a^b} g,$$
there exists a partition $P’$ such that
$$\overline{\int_a^b} (f+g) - U(P,f) > U(P’,g) \geqslant \overline{\int_a^b} g,$$
and
$$\overline{\int_a^b}(f+g) > U(P,f) + U(P’,g) .$$
Take a common refinement of the partitions $Q = P \cup P'$. Since upper sums decrease as partitions are refined, we have,
$$\tag{*}U(Q,f+g) \geqslant \overline{\int_a^b} (f+g) > U(P,f) + U(P’,g) \geqslant U(Q,f) + U(Q,g).$$
However, $\sup [f(x) + g(x)] \leqslant \sup f(x) + \sup g(x)$ and it follows that
$$U(Q,f+g) \leqslant U(Q,f) + U(Q,g),$$
which contradicts (*). |
Given a definition of the natural numbers $N$ define multiplication | In fact, in ZFC you can define the cartesian product of two sets (using e.g. Kuratowsky's definition of pair), that is
$$A\times B:=\left\{x\in P\left(P\left(\bigcup\{A,B\}\right)\right)\ \Big|\ \exists a\exists b\colon a\in A\land b\in B\land x=\left\{\{a\},\{a,b\}\right\}\right\} $$
then for $n, m\in\mathbb N$ define $n\cdot m$ as the unique element of $\mathbb N$ that can be bijected with $n\times m$.
As you note from even writing down the definition, a lot of other axioms are involved (powerset, union, pairing, comprehension). Also, have fun showing existence and uniqueness and the arithmetic properties (not to mention defining the concepts of function and bijection first).
Honestly, recursion is the method of choice for definitions for the set $\mathbb N$ or the class of ordinals. Ordinals are so rich and powerful even in the context of weirdest set theories (e.g. they are always well-founded even if a theory allows non-well-founded sets), it would be a shame to define an operation on them with heavy machinery instead of recursion.
How to approximate $x^y$ using a quadratic function | If there is an inbuilt exponential function, then you can use the following equation to approximate it in the given intervals:
$$x^y \approx \exp \left(y\frac{x-1}{6}\left(1+\frac{8}{1+x}+\frac{1}{x}\right)\right)$$
The inner term is an approximation to $\ln(x)$ using Simpson's Rule, which is quite accurate. You can use a lookup table or a Taylor series or some approximation if the inbuilt exp is not fast enough.
The accuracy is quite good, see this graph: https://www.desmos.com/calculator/75cebv9qks .
The green and blue lines coincide so nicely that they are not even visible separately.
Near $0.16$ it's not that accurate, and you may use 2 or 3 terms of Simpson's Rule.
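For reference, a direct transcription of the formula in plain Python (the test points are arbitrary):

```python
from math import exp

def pow_approx(x, y):
    """x**y via exp(y * ln x), with ln x from one Simpson step (formula above)."""
    return exp(y * (x - 1) / 6 * (1 + 8 / (1 + x) + 1 / x))

# spot-check the relative error on a few points away from the bad region
for x in (0.5, 0.9, 1.5, 2.0):
    for y in (0.3, 1.7, -0.8):
        rel = abs(pow_approx(x, y) - x**y) / x**y
        assert rel < 5e-3
print("relative error below 0.5% on the sample grid")
```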
Quick way to iterate multiples of a prime N that are not multiples of primes X, Y, Z, ...? | Up to $kN$ (inclusive) there are $k$ positive integer multiples of $N$, of which $\lfloor k/X \rfloor$ are multiples of $X$, etc. By inclusion-exclusion, the number that escape a "blacklist" of $3$ primes is
$$ k - \left\lfloor \frac{k}{X} \right\rfloor - \left\lfloor \frac{k}{Y} \right\rfloor - \left\lfloor \frac{k}{Z}
\right\rfloor + \left\lfloor \frac{k}{XY} \right\rfloor + \left\lfloor \frac{k}{XZ} \right\rfloor + \left\lfloor \frac{k}{YZ}\right\rfloor - \left\lfloor \frac{k}{XYZ}\right\rfloor $$
Approximate it without the $\lfloor \cdot \rfloor$'s, and you're off by at most $4$, so then adjust...
EDIT: For example, suppose you want the $35$'th positive integer multiple of $N$ (any prime other than $5$, $7$ or $11$) that is not a multiple of $X=5$, $Y=7$ or $Z=11$. Thus you want $k$ so that
$$f(k) = k - \left\lfloor \frac{k}{X} \right\rfloor - \left\lfloor \frac{k}{Y} \right\rfloor - \left\lfloor \frac{k}{Z}
\right\rfloor + \left\lfloor \frac{k}{XY} \right\rfloor + \left\lfloor \frac{k}{XZ} \right\rfloor + \left\lfloor \frac{k}{YZ}\right\rfloor - \left\lfloor \frac{k}{XYZ}\right\rfloor = 35$$
Now
$$\eqalign{k &- \frac{k}{X} - \frac{k}{Y} - \frac{k}{Z}
+ \frac{k}{XY} + \frac{k}{XZ} + \frac{k}{YZ} - \frac{k}{XYZ} \cr
&= k \left(1-\frac{1}{X}\right)\left(1 -\frac{1}{Y}\right) \left( 1 - \frac{1}{Z}\right) = \frac{48 k}{77}}$$
which would be $35$ for $k = 56.14\ldots$. Now $f(56) = 34$ so we try $f(57)$ and find that this is $35$. Thus $57N$ is the multiple of $N$ we are looking for. |
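The adjust step is a tiny loop; a direct transcription of this example:

```python
X, Y, Z = 5, 7, 11

def f(k):
    """Multiples of N up to k*N that avoid X, Y, Z (inclusion-exclusion)."""
    return (k - k // X - k // Y - k // Z
            + k // (X * Y) + k // (X * Z) + k // (Y * Z)
            - k // (X * Y * Z))

# estimate from the floor-free version, then adjust upward
k = round(35 * 77 / 48)        # 56
while f(k) < 35:
    k += 1
print(k, f(56), f(57))  # 57 34 35
```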
Identifying compositions of reflections, and rotations in a hexagon | Denote by $O$ the middle of the segment $[A,D]$. You can check that all the triangles $OAF, OBA, OCB, ODC, OED, OFE$ are equilateral triangles. I shall denote by $(AD)$ the straight line through $A$ and $D$, etc. It is obvious that $(AD)=(DA)$. The notation $\mathrm{Id}$ stands for the identity map.
i) In order to identify $R_{D,120}\circ R_{A,60}$ we can make the following decompositions:
$$
R_{D,120}=\rho_{\Delta_1}\circ \rho_{DA},\quad R_{A,60}=\rho_{AD}\circ\rho_{\Delta_2},
$$
with
$$
\Delta_1\cap(DA)=\{D\},\, \angle(DA,\Delta_1)=60,\, \Delta_2\cap(AD)=\{A\},\, \angle(AD,\Delta_2)=30.
$$
Since $\angle(\overrightarrow{DA},\overrightarrow{DE})=60$, we deduce that $\Delta_1=(DE)$. Similarly we have $\Delta_2=(AE)$ because $\angle(\overrightarrow{AE},\overrightarrow{AD})=30$.
Hence
$$
R_{D,120}\circ R_{A,60}=\left(\rho_{DE}\circ \rho_{DA}\right)\circ\left(\rho_{AD}\circ\rho_{AE}\right)=\rho_{DE}\circ\underbrace{\left(\rho_{DA}\circ\rho_{AD}\right)}_{\mathrm{Id}}\circ\rho_{AE}=\rho_{DE}\circ\rho_{AE}.
$$
Now notice that $(DE)\cap(AE)=\{\color{red}{E}\}$ and
$$
\angle(\overrightarrow{EA},\overrightarrow{ED})=\angle(\overrightarrow{EA},\overrightarrow{EO})+\angle(\overrightarrow{EO},\overrightarrow{ED})=30+60=90=\frac{\color{red}{180}}{2},
$$
and therefore
$$
R_{D,120}\circ R_{A,60}=R_{\color{red}{E},\color{red}{180}}.
$$
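This composition is easy to confirm with complex-number rotations $R_{P,\theta}(z)=P+e^{i\theta}(z-P)$. Note the sketch below labels the hexagon clockwise so that the stated angles act counterclockwise; with the opposite labeling, negate the angles:

```python
import cmath, math

# regular hexagon ABCDEF on the unit circle (clockwise labeling here)
A, B, C, D, E, F = (cmath.exp(-1j * k * math.pi / 3) for k in range(6))

def rot(center, degrees):
    """Rotation about `center` by `degrees`, as a map on complex numbers."""
    w = cmath.exp(1j * math.radians(degrees))
    return lambda z: center + w * (z - center)

r1 = rot(A, 60)
r2 = rot(D, 120)
r3 = rot(E, 180)

# R_{D,120} o R_{A,60} agrees with R_{E,180} on sample points
for z in (0.3 + 0.1j, -1.2 + 2.0j, A, C):
    assert abs(r2(r1(z)) - r3(z)) < 1e-12
print("R_{D,120} o R_{A,60} = R_{E,180} verified")
```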
ii) In order to identify $R_{F,180}\circ\rho_{\color{blue}{ED}}\circ R_{\color{blue}{D},120}$, we first make the following decomposition:
$$
R_{D,120}=\rho_{\color{blue}{DE}}\circ\rho_{\Delta_3},
$$
with
$$
\Delta_3\cap(DE)=\{\color{blue}{D}\},\, \angle(\Delta_3,DE)=60.
$$
Therefore $\Delta_3=(DA)$, because $OED$ is an equilateral triangle. It follows that
\begin{eqnarray}
R_{F,180}\circ\rho_{ED}\circ R_{D,120}&=&R_{F,180}\circ\left(\rho_{\color{blue}{ED}}\circ R_{\color{blue}{D},120}\right)=R_{F,180}\circ\left[\rho_{\color{blue}{ED}}\circ\left(\rho_{\color{blue}{DE}}\circ\rho_{\color{green}{DA}}\right) \right]\\
&=&R_{F,180}\circ\left[\underbrace{\left(\rho_{\color{blue}{ED}}\circ\rho_{\color{blue}{DE}}\right)}_{\mathrm{Id}}\circ\rho_{\color{green}{DA}}\right]=R_{F,180}\circ\rho_{\color{green}{DA}}.
\end{eqnarray}
Since $\overrightarrow{FB}\perp\overrightarrow{FE}$, i.e. $\angle(\overrightarrow{FB},\overrightarrow{FE})=90$ we can write:
$$
R_{F,180}=\rho_{FB}\circ\rho_{FE}.
$$
Thus
$$
R_{F,180}\circ\rho_{ED}\circ R_{D,120}=\left(\rho_{FB}\circ\rho_{FE}\right)\circ\rho_{DA}=\rho_{FB}\circ\underbrace{\left(\rho_{FE}\circ\rho_{DA}\right)}_{T_{\overrightarrow{BF}}}=\rho_{FB}\circ T_{\overrightarrow{BF}},
$$
i.e.
$$
R_{F,180}\circ\rho_{ED}\circ R_{D,120}=\rho_{BF}\circ T_{\overrightarrow{BF}}=T_{\overrightarrow{BF}}\circ\rho_{BF}
$$
is a glide reflection. |
Is the following information sufficient to guarantee a global maximum at the corner of some interval? | Assume $$h_a''(x^\ast) < 0 \quad\forall\ 0<a<\bar{a}$$
Then for every $a \in (0, \bar{a})$ there exists $\epsilon > 0$ such that
$$h_a''(x) < 0 \quad\forall\ x \in (x^\ast - \epsilon, x^\ast]$$
Now the problem is that $\epsilon$ depends on $a$, and $\liminf_{a\to 0} \epsilon(a)$ might be $0$. This fact could be exploited to construct a counterexample.
Advice for how to learn more advanced math for audio signal processing? | This is probably heretical for math.SE, but you don't need to understand that equation. Just skim over it. You aren't going to use it for anything anyway.
Signal processing isn't mathematically rigorous (see the intro of Dirac delta "function", for instance). You don't actually work out integrals to find Fourier transforms. Instead, you memorize the most common Fourier transform pairs, and learn how mathematical operations in the time domain translate to the frequency domain (multiplication ⇔ convolution, for instance), so you can represent complicated signals as a combination of simple signals that you can work with easily.
Engineering is all about applying mathematics to build practical things, and taking lots of shortcuts and simplifications in the process. We transform to the Laplace domain and use phasors to avoid doing differential equations, converting them into polynomials and algebra. We memorize tables of common Fourier transforms to avoid doing the integrals, etc.
(figure: table of common Fourier transform pairs)
For instance, say you have a recording of a tuning fork at 440 Hz (a sine wave), and you want to send it over the radio at 1 MHz. To do this, you multiply the 440 Hz sine wave with another sine wave at 1 MHz. This is amplitude modulation.
$x(t) = \cos(2 \pi 440 t) \cdot \cos(2 \pi 1000000 t)$
You know the Fourier transform of each sinusoid is a Dirac spike (as in the above graphic), and you know that multiplication in the time domain is equivalent to convolution in the frequency domain, so you can convolve the spectra of the two sine waves to get the spectrum of the result. Once you learn convolution, you'll know that this is just two spikes at the sum and difference frequencies: 1000000-440 and 1000000+440. You don't actually go through the trouble of solving the integral
$X(\Omega) = \int_{-\infty}^\infty \cos(2 \pi 440 t) \cdot \cos(2 \pi 1000000 t)e^{-i\Omega t} dt$
Solving this is not trivial, but applying transform tables is. It's more important to see in your head what's happening.
To demodulate at the other end, you multiply by 1 MHz again, producing frequency components at the sum and difference frequencies again, which are now 440 Hz, 2000440 Hz, and 1999560 Hz. The latter two can be thrown away by filtering, which just means multiplying by 0 in the frequency domain using a rectangle function, and you're left with the original recording. (And again, this is not mathematically rigorous; real filters are not rectangular, and calculating real filters' actual effects mathematically can be very difficult.)
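The whole story is easy to see numerically with an FFT. Here is a sketch with scaled-down, made-up frequencies (a 40 Hz tone on a 1000 Hz carrier, so the sum and difference components land at 960 and 1040 Hz):

```python
import numpy as np

fs = 8000                          # sample rate
t = np.arange(0, 1.0, 1.0 / fs)    # 1 s of signal => 1 Hz resolution
x = np.cos(2 * np.pi * 40 * t) * np.cos(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

# all the energy sits at the sum and difference frequencies
peaks = freqs[spectrum > 0.5 * spectrum.max()]
print(sorted(peaks.tolist()))  # [960.0, 1040.0]
```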
For the stuff you want to know about audio signal processing, this is sufficient. When you get into more advanced stuff and need to know the details, you can go back and learn it in more depth.
The relationship of formal mathematics to the real world is ambiguous. Apparently, in the early history of mathematics the mathematical abstractions of integers, fractions, points, lines, and planes were fairly directly based on experience in the physical world. However, much of modern mathematics seems to have its sources more in the internal needs of mathematics and in esthetics, rather than in the needs of the physical world. Since we are interested mainly in using mathematics, we are obliged in our turn to be ambiguous with respect to mathematical rigor. Those who believe that mathematical rigor justifies the use of mathematics in applications are referred to Lighthill and Papoulis for rigor; those who believe that it is the usefulness in practice that justifies the mathematics are referred to the rest of this book. (Hamming, Digital Filters, 1998 Dover edition, page 72.) |
Ordinals which satisfy $\beta \cdot \alpha=\alpha$ | The ordinals of interest are multiples of $\beta^\omega$ i.e. $\alpha=\beta^\omega\cdot\gamma$ for some ordinal $\gamma$. It is straightforward to see that these ordinals satisfy the equation. To prove these are the only ones, one may simply see that $\beta\cdot(\beta^\omega\cdot\gamma+\delta)=\beta^\omega\cdot\gamma+\beta\cdot\delta>\beta^\omega\cdot\gamma+\delta$ whenever $\beta>1$ and $0<\delta<\beta^\omega$. |
Using the Mean Value Theorem to show Continuity | We need to show: $\displaystyle \lim_{x \to 0^{+}} f(x) = f(0) = \displaystyle \lim_{x \to 0^{-}} f(x)$.
For the left equation, we have: for $x > 0$, $f(x) - f(0) = \dfrac{e^x-1}{x} - 1 = \dfrac{e^x-1-x}{x} = e^{\xi} - 1$, $\xi \in (0,x)$. Thus:
$\displaystyle \lim_{x \to 0^{+}} f(x) = \displaystyle \lim_{\xi \to 0^{+}} e^{\xi} = 1 = f(0)$.
Similarly, $\displaystyle \lim_{x \to 0^{-}} f(x) = \displaystyle \lim_{\xi \to 0^{-}} e^{\xi} = 1 = f(0)$.
We conclude that $f$ is continuous at $x = 0$.
MST or not without children ? | It is necessary that $v$ have degree more than $1$. Kruskal is a good start. I would then leverage the cycle property and cut property here. Add in $v$'s lowest-weight unused edge and remove the heaviest edge already in the tree on the cycle created. |
Matrix representation of linear map in this basis | To summarize the (excellent!) discussion in the comments, you know so far that the matrix representation of $A$ with respect to the given ordered basis takes the form
$$
[A] = (e_2 \mid e_3 \mid \cdots \mid e_n \mid v )
$$
where $e_i$ is the column vector with $1$ in its $i$-th entry and $0$ elsewhere and $v$ is some yet-to-be-determined vector. In particular, we know that $v = (\alpha_0,\alpha_1,\ldots,\alpha_{n-1})^T$ where $\alpha_i \in F$ ($0\leq i \leq n-1$) satisfy
$$
A^n(b) = \alpha_0 b + \alpha_1 A(b) + \ldots + \alpha_{n-1}A^{n-1}(b).
$$
Hence (as the wonderfully-named @Omnomnomnom alludes to in his comment) we can rearrange this to be of the form
$$
A^n(b) - \alpha_{n-1}A^{n-1}(b) - \ldots - \alpha_0 b = 0
$$
Moreover, if $0 \leq k \leq n-1$ we can compute that
\begin{align*}
A^n[A^k(b)] - &\alpha_{n-1}A^{n-1}[A^k(b)] - \ldots - \alpha_0 A^k(b)\\
&= A^k[A^n(b) - \alpha_{n-1}A^{n-1}(b) - \ldots - \alpha_0 b] \\
&= A^k(0) \\
&= 0,
\end{align*}
and so we have $p(A) = 0$ for $p(x) = x^n - \alpha_{n-1}x^{n-1} - \ldots - \alpha_0$ (since we have just shown that $p(A)(w) = 0$ for all $w$ in a basis of $U$). Furthermore, if $q$ is a nonzero polynomial of degree less than that of $p$, then $q(A) \neq 0$ (why?). Hence you can conclude that $p(x)$ is the minimal polynomial of $A$. This gives you a way to describe the final column vector $v$ in $[A]$ more explicitly. |
For $1\leq j\leq i\leq n$, does $\sum_{k=j}^i (-1)^{i-k} \binom{n-k}{i-k} \binom{n-j}{k-j}$ equal $\delta_{i,j}$? | We obtain
\begin{align*}
\sum_{k=j}^i&(-1)^{i-k}\binom{n-k}{i-k}\binom{n-j}{k-j}\\
&=\sum_{k=j}^i(-1)^{i-k}\frac{(n-k)!}{(i-k)!(n-i)!}
\cdot\frac{(n-j)!}{(k-j)!(n-k)!}\tag{1}\\
&=\sum_{k=j}^i(-1)^{i-k}\frac{(i-j)!}{(i-k)!(k-j)!}
\cdot\frac{(n-j)!}{(n-i)!(i-j)!}\tag{2}\\
&=\binom{n-j}{n-i}\sum_{k=j}^i(-1)^{i-k}\binom{i-j}{k-j}\tag{3}\\
&=\binom{n-j}{n-i}\sum_{k=0}^{i-j}(-1)^{i-j+k}\binom{i-j}{k}\tag{4}\\
&=\binom{n-j}{n-i}(-1)^{i-j}(1-1)^{i-j}\tag{5}\\
&=\delta_{i,j}
\end{align*}
Comment:
In (1) we use $\binom{p}{q}=\frac{p!}{q!(p-q)!}$.
In (2) we cancel $(n-k)!$, expand with $(i-j)!$, and do some rearrangements.
In (3) we can factor out $\binom{n-j}{n-i}$ which is independent from the index variable $k$.
In (4) we shift the index $k$ to start the series from $k=0$.
In (5) we use the binomial theorem with $(1-1)^{i-j}$.
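Before turning to a second proof, the identity is easy to confirm numerically; here is a short Python check using the standard-library `math.comb`:

```python
from math import comb

# Left-hand side of the identity for given n, i, j with 1 <= j <= i <= n.
def lhs(n, i, j):
    return sum((-1) ** (i - k) * comb(n - k, i - k) * comb(n - j, k - j)
               for k in range(j, i + 1))

# The sum equals the Kronecker delta for all admissible triples.
for n in range(1, 13):
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            assert lhs(n, i, j) == (1 if i == j else 0)
```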
Here is a different technique based upon generating functions. It is convenient to use the coefficient of operator $[z^k]$ to denote the coefficient of $z^k$.
This way we can write e.g.
\begin{align*}
[z^k](1+z)^n=\binom{n}{k}
\end{align*}
We obtain
\begin{align*}
\sum_{k=j}^i&(-1)^{i-k}\binom{n-k}{i-k}\binom{n-j}{k-j}\\
&=\sum_{k= j}^\infty(-1)^{i-k}[z^{i-k}](1+z)^{n-k}[u^{k-j}](1+u)^{n-j}\tag{6}\\
&=(-1)^i[z^i](1+z)^n\sum_{k= j}^\infty\left(-\frac{z}{1+z}\right)^k
[u^k]u^j(1+u)^{n-j}\tag{7}\\
&=(-1)^i[z^i](1+z)^n\sum_{k= 0}^\infty\left(-\frac{z}{1+z}\right)^{k+j}
[u^k](1+u)^{n-j}\tag{8}\\
&=(-1)^i[z^{i}](1+z)^n\left(-\frac{z}{1+z}\right)^{j}\left(1-\frac{z}{1+z}\right)^{n-j}\tag{9}\\
&=(-1)^{i-j}[z^{i-j}]1\tag{10}\\
&=\delta_{i,j}
\end{align*}
Comment:
In (6) we apply the coefficient of operator twice and we set the upper limit of the series to $\infty$ without changing anything since we are adding zeros only.
In (7) we use the linearity of the coefficient of operator, do some rearrangements and apply the rule $$[z^{i-k}]A(z)=[z^i]z^kA(z)$$
In (8) we shift the index $k$ to start the series from $k=0$.
In (9) we factor out $\left(-\frac{z}{1+z}\right)^j$ and apply the substitution rule of the coefficient of operator with $u:=-\frac{z}{1+z}$
\begin{align*}
A(z)=\sum_{k=0}^\infty a_kz^k=\sum_{k=0}^\infty z^k[u^k]A(u)
\end{align*}
In (10) we can make essential simplifications. |
The completness of ring and its power series ring | Here is a short calculation (whose steps can be justified by the reader):
$\lim_n R[[x]]/(I,x)^n = \lim_n R[[x]]/(I^n,x^n) = \lim_m \lim_n R[[x]]/(I^n,x^m)$
$= \lim_m \lim_n R/I^n[[x]]/(x^m) =\lim_m R[[x]]/(x^m) = R[[x]].$ |
Axis angle and length of ellipse | You can write the coordinates at time $t$ as
\begin{eqnarray}
\left( \begin{array}{c}
x(t) \\
y(t)
\end{array}
\right) =
\left( \begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)\cdot
\left( \begin{array}{c}
\cos t \\
\sin t
\end{array}
\right)
\end{eqnarray}
Consider a singular value decomposition
$$\left( \begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array} \right ) = \left( \begin{array}{cc}
\cos u & -\sin u \\
\sin u & \cos u
\end{array} \right ) \cdot
\left( \begin{array}{cc}
d_1 & 0 \\
0 & d_2
\end{array} \right )
\cdot
\left( \begin{array}{cc}
\cos v & -\sin v \\
\sin v & \cos v
\end{array} \right )
$$
So we get
$$\left( \begin{array}{c}
x(t) \\
y(t)
\end{array}
\right) = \left( \begin{array}{cc}
\cos u & -\sin u \\
\sin u & \cos u
\end{array} \right ) \cdot \left( \begin{array}{c}
d_1 \cos (t+v) \\
d_2 \sin (t+v)
\end{array}
\right) $$
So this is a rotation by angle $u$ of a common ellipse with semi-axes $|d_1|$ and $|d_2|$. |
Prove that $J \cap R(1 - e) \neq \{0\}$ | Assume the intersection to be zero.
We have
$$R/J = (J+R(1-e))/J \cong R(1-e)/(J \cap R(1-e)) = R(1-e) \cong R/Re,$$
but $Re = I \cap J$ is not maximal. |
What is the relationship between the minimal polynomial of $T$ and the minimal polynomials of the component maps? | Let $T :V\oplus W\rightarrow V\oplus W$ be the linear transformation defined by
$T(v,w)=(T_1(v) ,T_2(w))$ for $(v,w) \in V\oplus W$, and let $h(x)$ be the minimal polynomial of $T$.
So the minimal polynomial $h(x)$ of $T$ is $\operatorname{lcm}(g(x), f(x))$.
Given $f(x)=x^3+x^2+x+1$ and $g(x)=x^4-x^2-2$.
After computing the lcm you'll find $h(x)=x^5+x^4-x^3-x^2-2x-2$. Hence $\deg(h(x))=5$. Also notice that the constant term of $h(x)$ is nonzero, so $0$ is not an eigenvalue of $T$, and hence $\operatorname{nullity}(T)=0$.
Note:
The minimal polynomial of $T$ is the l.c.m of minimal polynomials of $T_1$ and $T_2$. |
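As a quick sanity check of the arithmetic, here is a small Python sketch (coefficient lists are highest degree first; `polymod` assumes a monic divisor, which suffices here):

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists, highest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polymod(p, q):
    """Remainder of p divided by the monic polynomial q."""
    p = list(p)
    while len(p) >= len(q):
        c = p[0]
        for i, b in enumerate(q):
            p[i] -= c * b
        p.pop(0)  # leading coefficient is now zero
    return p

f = [1, 1, 1, 1]            # x^3 + x^2 + x + 1
g = [1, 0, -1, 0, -2]       # x^4 - x^2 - 2
h = [1, 1, -1, -1, -2, -2]  # claimed lcm: x^5 + x^4 - x^3 - x^2 - 2x - 2

# h is a common multiple of f and g ...
assert all(c == 0 for c in polymod(h, f))
assert all(c == 0 for c in polymod(h, g))
# ... and indeed h = (x + 1) * g, consistent with gcd(f, g) = x^2 + 1.
assert polymul([1, 1], g) == h
```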
Using a power series to approximate $\int_0^{.3} \frac{x^2}{1+x^7}dx$ to six decimal places | Too long for comments.
This is a very interesting problem, especially if you are coding; not knowing in advance how many terms have to be added for a given accuracy, at the level of each summation you need an IF test, and this is an expensive operation in terms of computer resources.
Making the problem more general, consider that you want to compute
$$I=\int_0^t \frac {x^a}{1+x^b}\,dx \qquad \text{with}\qquad a\geq 0\qquad \text{and}\qquad b\geq 1$$ for an absolute error $\leq 10^{-k}$.
As you properly did, using the binomial expansion, we have
$$\frac {x^a}{1+x^b}=\sum_{n=0}^\infty (-1)^n x^{a+n b}$$ Writing the result as
$$I=\sum_{n=0}^p (-1)^n \frac{t^{a+b n+1}}{a+b n+1}+\sum_{n=p+1}^\infty (-1)^n \frac{t^{a+b n+1}}{a+b n+1}$$ since it is an alternating series, we look for $p$ such that
$$R_p=\frac{t^{a+b (p+1)+1}}{a+b(p+1)+1} \leq 10^{-k}$$ There is an explicit solution which is
$$a+b (p+1)+1=-\frac{W\left(-10^k \log (t)\right)}{\log (t)}\implies p=\cdots$$ where $W(.)$ is Lambert function.
Applied to your problem $a=2$, $b=7$, $k=6$ and $t=\frac 3{10}$ , this gives
$$7p+10=\frac{W\left(10^6 \log \left(\frac{10}{3}\right)\right)}{\log
\left(\frac{10}{3}\right)}\approx 9.59664 \implies p=-0.058 \quad (!!)$$
So, as you properly showed, a single term should be sufficient. Effectively
$$R_1=\frac{\left(\frac{3}{10}\right)^{17}}{17}\approx 7.60 \times 10^{-11}\ll 10^{-6}$$
But, changing the problem to $k=60$ would give
$$7p+10=\frac{W\left(10^{60} \log \left(\frac{10}{3}\right)\right)}{\log
\left(\frac{10}{3}\right)}\approx 110.839 \implies p=14.4056 $$ So $\lceil p\rceil=15$. Checking
$$R_{14}=3.13\times 10^{-59} >10^{-60} \quad \text{and}\quad R_{15}=6.43\times 10^{-62}< 10^{-60}$$
In the linked page, you will find simple formulae for the approximation of $W(x)$ when $x$ is large. |
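The numbers above are easy to reproduce; here is a short Python sketch with a hand-rolled Newton iteration for the principal branch of $W$ (so no SciPy dependency is needed):

```python
import math

def lambert_w(x, tol=1e-13):
    """Principal branch of W (solving w*e^w = x) for x > 0, via Newton's method."""
    w = math.log(x) if x > math.e else x / math.e  # crude starting guess
    for _ in range(200):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) <= tol * max(1.0, abs(w)):
            break
    return w

L = math.log(10 / 3)
rhs6 = lambert_w(10**6 * L) / L     # ≈ 9.59664, so 7p + 10 ≈ 9.59664
rhs60 = lambert_w(10**60 * L) / L   # ≈ 110.839, so p ≈ 14.4
r1 = 0.3**17 / 17                   # ≈ 7.60e-11, the remainder bound R_1
```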
Why does the automorphism mapping $\omega$ to 1, not an element of Galois group | Automorphisms are always bijective, for any structure (the general definition being: a homomorphism for which an inverse homomorphism exists). Moreover we deal with fields here, that is, rings with only two ideals: the trivial ideal and the full field. Thus even if we loosen the definition of automorphism, the only non-injective homomorphism is the trivial homomorphism that sends everything to $0$.
Also, $1$ is not a root of $X^6+X^5+X^4+X^3+X^2+1$. |
Does the set of all symmetries of a plane figure form a group under composition of functions? | Recall that a symmetry of a plane figure $P \subseteq \mathbb{R}^{2}$ is an isometry $f : \mathbb{R}^{2} \to \mathbb{R}^{2}$ whereby $f(P) = P$. Letting $d$ denote the Euclidean metric on $\mathbb{R}^{2}$, recall that a mapping $f : \mathbb{R}^{2} \to \mathbb{R}^{2}$ is an isometry if $$d(f(\vec{v}), f(\vec{w})) = d(\vec{v}, \vec{w})$$ for all $\vec{v}, \vec{w} \in \mathbb{R}^{2}$.
To prove that the set of all symmetries of a plane figure forms a group under composition of functions, you need to use the above definition of a symmetry of a plane figure.
Letting $S = S_{P}$ denote the set of all symmetries of a plane figure $P$, let $f, g \in S$ be arbitrary. Since $f : \mathbb{R}^{2} \to \mathbb{R}^{2}$ and $g : \mathbb{R}^{2} \to \mathbb{R}^{2}$ are isometries, we have that $$d(fg(\vec{v}),fg(\vec{w}))=d(g(\vec{v}),g(\vec{w}))=d(\vec{v},\vec{w})$$ for all $\vec{v}, \vec{w} \in \mathbb{R}^{2}$, and we thus have that the composition $fg = f \circ g : \mathbb{R}^{2} \to \mathbb{R}^{2}$ is an isometry. Since $f$ and $g$ are symmetries of $P$, we have that $$fg(P) = f(P) = P$$ and we thus have that $fg$ is also a symmetry of $P$. Therefore, the composition operation $\circ$ is a binary operation on $S$.
Since the composition of functions is (in general) associative, we have that $\circ$ is an associative binary operation on $S$.
It is obvious that the identity mapping $\text{id} : \mathbb{R}^{2} \to \mathbb{R}^{2}$ on $\mathbb{R}^{2}$ is an isometry which fixes $P$, and it is obvious that $\text{id} \circ f = f \circ \text{id}$ for all $f \in S$.
So it remains to prove that the inverse axiom holds. To prove this, prove that each symmetry $f \in S$ is bijective and thus has an inverse $f^{-1}$, and then use the definition of the term symmetry given above to prove that $f^{-1} \in S$ for $f \in S$.
Reference: "How to Count: an Introduction to Combinatorics", R. Allenby and A. Slomson. |
Let $q$ be a prime of the form $4k+3$. If $2q+1$ is prime, then $2q+1$ divides $2^q−1$. | Since $p=2q+1$ is a prime of the form $8k+7$, it follows that $2$ is a quadratic residue of $p$. (Recall that $2$ is a quadratic residue of the odd prime $p$ iff $p$ is of the form $8k\pm 1$.)
Thus by Euler's Criterion, $2^{(p-1)/2}\equiv 1\pmod{p}$, and therefore $2^q\equiv 1\pmod{p}$. This says that $p$, that is, $2q+1$, divides $2^q-1$.
Note that $2q+1$ is a proper divisor of $2^q-1$ unless $2q+1=2^q-1$. That happens when $q=3$, but nowhere else, since for $q\gt 3$, $2q+1\lt 2^q-1$. |
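A brute-force check of the statement for small $q$ (plain Python, trial-division primality test):

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Primes q ≡ 3 (mod 4) such that 2q+1 is also prime.
qs = [q for q in range(3, 1000)
      if is_prime(q) and q % 4 == 3 and is_prime(2 * q + 1)]
for q in qs:
    assert pow(2, q, 2 * q + 1) == 1   # i.e. 2q+1 divides 2^q - 1
```

For instance, $q=11$ gives $23 \mid 2^{11}-1 = 2047 = 23\cdot 89$, so $2^{11}-1$ is not prime even though $11$ is.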
Intuition on ds v. dA in Stokes type problems | When you parametrize a surface by $\mathbf r(u,v)$, the parallelogram spanned by $\dfrac{\partial\mathbf r}{\partial u}$ and $\dfrac{\partial\mathbf r}{\partial v}$ has area $$\left\|\dfrac{\partial\mathbf r}{\partial u}\times\dfrac{\partial\mathbf r}{\partial v}\right\|.$$
The cross product itself is a (non-normalized) normal vector and this area becomes the "fudge factor" (like the $r$ in $r\,dr\,d\theta$) that tells you how to relate area in the $uv$-plane to area on the surface. After that, it's just
\begin{align*}
\mathbf F\cdot\mathbf n\,dS &= \left(\mathbf F\cdot\frac{\frac{\partial\mathbf r}{\partial u}\times\frac{\partial\mathbf r}{\partial v}}{\left\|\frac{\partial\mathbf r}{\partial u}\times\frac{\partial\mathbf r}{\partial v}\right\|}\right)\left\|\frac{\partial\mathbf r}{\partial u}\times\frac{\partial\mathbf r}{\partial v}\right\|du\,dv \\
&=\mathbf F\cdot \left(\frac{\partial\mathbf r}{\partial u}\times\frac{\partial\mathbf r}{\partial v}\right) du\,dv.
\end{align*} |
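As a concrete illustration of the cross product as area factor: for the unit sphere parametrized by colatitude $u$ and longitude $v$, the norm of the cross product is $\sin u$, the familiar "fudge factor" in $\sin u \, du\, dv$. A numerical sketch:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def r(u, v):
    """Unit sphere: colatitude u, longitude v."""
    return (math.sin(u) * math.cos(v), math.sin(u) * math.sin(v), math.cos(u))

def partial(f, u, v, var, h=1e-6):
    """Central-difference partial derivative of f with respect to u or v."""
    if var == "u":
        p, q = f(u + h, v), f(u - h, v)
    else:
        p, q = f(u, v + h), f(u, v - h)
    return tuple((a - b) / (2 * h) for a, b in zip(p, q))

u, v = 0.7, 1.3
n = cross(partial(r, u, v, "u"), partial(r, u, v, "v"))
area_factor = math.sqrt(sum(c * c for c in n))
assert abs(area_factor - math.sin(u)) < 1e-6
```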
Reducing products in modular arithmetic | @JyrkiLahtonen's comment led me to the right answer, and I'm just posting it here for the sake of having something to accept, since it went overnight without being posted.
From $$2^{20} \equiv 81 \cdot 81 \pmod{41}$$ we can use the rule stating that $$a\equiv b \pmod n \ \ \wedge \ \ c\equiv d \pmod n \ \ \Rightarrow \ \ ac \equiv bd \pmod n$$
In this case, we have $$81\equiv -1 \pmod{41} \ \ \wedge \ \ 81\equiv -1 \pmod{41} \ \ \Rightarrow \ \ 81\cdot81 \equiv (-1)(-1) \pmod{41}$$
with $a=c=81$ and $b=d=-1$ such that $$2^{20} \equiv 81\cdot 81 \equiv(-1)(-1) = 1 \pmod{41}$$ finishing the problem. |
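Each congruence above can be checked directly in Python:

```python
# 2^20 ≡ 81·81 (mod 41), 81 ≡ -1 (mod 41), hence 2^20 ≡ (-1)(-1) = 1 (mod 41).
assert (2**20 - 81 * 81) % 41 == 0
assert (81 - (-1)) % 41 == 0
assert pow(2, 20, 41) == 1
```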
The closure of the range of the operator of a symmetric nonnegative operator on a real separable Hilbert space $H$? | Suppose the eigenvalues are not bounded below, then we may find a sequence of eigenvalues $\lambda_n$ with $\lambda_n \le \frac{1}{n}$. If $e_n$ are the corresponding orthonormal eigenvectors, show that the sum $\sum_n \lambda_n e_n$ converges to some $y \in H$, and $y$ is not in the range of $Q$ but is in its closure. (Intuitively, if we had $Qx=y$ then we would have to have $x = \sum_n e_n$ but this sum does not converge.)
See 1.
No, $(\ker Q)^\perp$ is necessarily closed but we have just shown the range need not be. However, you can say that $(\ker Q)^\perp$ is the closure of the range of $Q$; this is easy to prove using the eigenvectors and eigenvalues of $Q$ (or directly, using the self-adjointness of $Q$). |
repeated exponents sign | I've seen this operation represented as
$$
\underset{i=1}{\overset{n}{\LARGE\mathrm E}}\;x_i
$$
Using this notation, your example would look something like this:
$$
\underset{i=1}{\overset{n}{\LARGE\mathrm E}}\;\frac{x}{2i}
$$
Note that this has a different first base than your example ($x$ vs. $\frac{x}{2}$) to simplify typesetting. You can get a little more detail on generic exponentiation on OEIS.
Tetration is a special case that has the form
$$
\underset{i=1}{\overset{n}{\LARGE\mathrm E}}\;a
$$
You can get more details on tetration on Wikipedia. |
How to rewrite $\lim_{h \to 0}\frac{e^h - 1}{h}=1$ into $\lim_{n \to +\infty}\left(\frac{x}{n}+1\right)^n=e^x$ | Substituting $m=nx$ in $e^x=\lim_{n \to +\infty}\left(1 + \frac{1}{n}\right)^{nx}$, we have $n=\frac{m}{x}$, and for $x>0$ we get
$$\lim_{m \to +\infty}\left(1 + \frac{x}{m}\right)^{m}$$
The case $x=0$ is obvious; if $x<0$, put $y=-x$ and we get
$$\lim_{n \to +\infty} \frac{1}{(1 + \frac{1}{n})^{ny}}$$
and then it can be treated as before. |
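A quick numerical check of the limit for either sign of $x$ (a Python sketch):

```python
import math

for x in (2.0, 0.0, -1.5):
    n = 10**7
    approx = (1 + x / n) ** n
    # the error is on the order of e^x * x^2 / (2n)
    assert abs(approx - math.exp(x)) < 1e-5
```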
Can we find $n$ satisfying this equation for certain $p$ | Since $p_p$ may be confusing, let me rename the integer $p$ as $k$.
From $p_{k+1}-p_k\ge2$ we get that $p_m-p_k\ge2(m-k)$. All the terms in the sum are non-positive. The only way in which it can be equal to $0$ is if all the terms are null. This happens in the following cases:
$k=2$, $n=4$
$k>2$, $n=k+2$ and $p_{k}$ is the smallest of a pair of twin primes.
$k=2$, $n=5$ |
Permutations with limited repetition | Hint: Given a valid string, let $(a_0, a_1, a_2)$ be the number of digits that appear 0, 1, and 2 times.
The conditions give us $ a_0 + a_1 + a_ 2 = 10, 0 \times a_0 + 1 \times a_ 1 + 2 \times a_2 = 7 $.
This means that we only have $ ( 4, 5, 1), (5, 3, 2), (6, 1, 3)$ as possible solutions.
How many valid strings of the form $(4, 5, 1)$ are there? |
Gauss's divergence integration | $\quad \vec{F} (x, y, z) =(x^2y, z − xy^2, z^2)$
$\nabla·\vec F=2xy-2xy+2z=2z$
$$\int_S\vec{F}·\hat n\,\mathrm ds=\int_V\nabla·\vec F\,\mathrm dv=\int_0^1\int_{-1}^1\int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}}2z\,\mathrm dx\,\mathrm dy\,\mathrm dz=$$
$$=\int_0^1\int_{-1}^1\left[2zx\right]_{x=-\sqrt{1-y^2}}^{x=\sqrt{1-y^2}}\mathrm dy\mathrm dz=\int_0^1\int_{-1}^12z\left(\sqrt{1-y^2}-\left(-\sqrt{1-y^2}\right)\right)\mathrm dy\mathrm dz$$
$$=\int_0^1\int_{-1}^1\left(4z\sqrt{1-y^2}\right)\mathrm dy\mathrm dz=\int_0^14z\left[y\sqrt{1-y^2}+\arcsin y\right]_{-1}^1\mathrm dz=$$
$$=\int_0^12\pi z\mathrm dz=2\pi[z^2/2]_0^1=\pi$$ |
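Since $\int_0^1 2z\,\mathrm dz = 1$ and the $(x,y)$-limits describe the unit disc, the triple integral is just the disc area $\pi$; a midpoint-rule check in Python:

```python
import math

# Integrate 2*sqrt(1 - y^2) over [-1, 1] by the midpoint rule:
# this is the area of the unit disc, and hence the value of the flux integral.
N = 2000
h = 2.0 / N
disc_area = sum(2 * math.sqrt(1 - (-1 + (i + 0.5) * h) ** 2) * h
                for i in range(N))
assert abs(disc_area - math.pi) < 1e-4
```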
Why is the set of well formed formulas are defined as the smallest set of strings | Consider first a simpler example:
The set of even numbers $E$ is the smallest $X\subseteq \mathbb{N}$ with the following two properties:
$0\in X$, and
if $n\in X$ then $n+2\in X$.
The two bullet points alone do not pin down $E$! For example, both $\mathbb{N}$ itself and $E\cup\{n\in\mathbb{N}: n\ge 17\}$ satisfy them. Basically, the minimality clause is required to make sure that no "unintended" elements enter the set we're defining.
Turning back to the example in the question, note that the set of all finite strings of symbols satisfies points $1$ and $2$ of your definition; it's only the minimality clause that tells us that that's not what we have in mind. See also here. |
Difficulty with factorizing determinant | Let us call $\Delta$ this determinant. Do the following operation: replace the last column by its difference with the second one ($C_3\leftarrow C_3-C_2$) to get
$$
\Delta=\begin{vmatrix} 1 & 1 &0\\a^2 & b^2 & c^2-b^2 \\(b+c)^2 & (c+a)^2 & (a+b)^2-(c+a)^2 \end{vmatrix}.
$$
Since $(a+b)^2-(c+a)^2=\left(b-c\right)(2a+b+c)$, we can use multilinearity of the determinant to get
$$
\Delta=(b-c)\begin{vmatrix} 1 & 1 &0\\a^2 & b^2 & -(b+c) \\(b+c)^2 & (c+a)^2 & 2a+b+c \end{vmatrix}.
$$
Then we do $C_2\leftarrow C_2-C_1$ to get
$$
\Delta=(b-c)(a-b)\begin{vmatrix} 1 & 0&0\\a^2 & -(a+b) & -(b+c) \\(b+c)^2 & a+b+2c & 2a+b+c \end{vmatrix}.
$$
Expanding with respect to the first line gives
$$
\Delta=(b-c)(a-b)\begin{vmatrix} -(a+b) & -(b+c) \\ a+b+2c & 2a+b+c \end{vmatrix}.
$$
Doing $L_2\leftarrow L_2+L_1$ gives
$$
\Delta=(b-c)(a-b)\begin{vmatrix} -(a+b) & -(b+c) \\ 2c & 2a \end{vmatrix}
=2(b-c)(b-a)\begin{vmatrix} a+b & b+c \\ c & a \end{vmatrix}.
$$
Finally, do $C_2\leftarrow C_2-C_1$ to get the wanted result. |
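For what it's worth, finishing that last computation gives $\Delta=2(b-c)(b-a)(a-c)(a+b+c)$, which we can confirm numerically against the original determinant:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for a, b, c in [(1, 2, 3), (2, -1, 5), (0, 4, 7)]:
    M = [[1, 1, 1],
         [a * a, b * b, c * c],
         [(b + c) ** 2, (c + a) ** 2, (a + b) ** 2]]
    assert det3(M) == 2 * (b - c) * (b - a) * (a - c) * (a + b + c)
```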
Variance of dot product of two normalized random vector | This can be treated in analogy to Expected value of inner product of uniformly distributed random unit vectors.
By rotational symmetry, we can assume one of the vectors to be a fixed unit vector, say, $\mathbf e_1$. The expected value of the dot product with this vector is $0$ by symmetry. Thus the variance is the expected value of the square. The sum of the squares of the $n$ components is $1$ due to normalization, so by symmetry the expected value of each squared component must be $\frac1n$. |
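A Monte Carlo sanity check (pure Python; uniform directions are obtained by normalizing Gaussian vectors, and the tolerance is generous):

```python
import math
import random

random.seed(0)
n, trials = 4, 200_000
acc = 0.0
for _ in range(trials):
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(t * t for t in v))
    acc += (v[0] / norm) ** 2        # squared dot product with e_1
est_var = acc / trials               # the mean is 0, so this estimates the variance
assert abs(est_var - 1 / n) < 0.01
```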
Arriving at odd possible solutions for functions | Continuing what you have written,
$f^2(x)=\frac{9x+5}{4}=2k+1$
$k=\frac{9x+1}{8}$, where k is any positive integer.
Note that $x=7$ satisfies this.
Further, suppose that $x=7+y$ also gives an integer $k$.
Then, $$k=\frac{9(7+y)+1}{8}$$ $$k=8+\frac{9y}{8}$$ $$\implies 8|y $$.
It gives $x=7,15,23,31...$ |
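The divisibility condition is easy to confirm by brute force:

```python
# k = (9x + 1)/8 is an integer exactly when 9x + 1 ≡ 0 (mod 8), i.e. x ≡ 7 (mod 8).
solutions = [x for x in range(1, 40) if (9 * x + 1) % 8 == 0]
assert solutions == [7, 15, 23, 31, 39]
```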
Is it useful to know that automorphisms on $(\mathbb R^{\gt0},+)$ are always continuous? | For this exposition we are launching off the theory of magnitudes platform and are assuming that we know nothing about $(\mathbb R^{\gt0},1,+)$ except that it satisfies $\text{P-0}$ thru $\text{P-5}$ and the theorem found here.
In this study of foundational logic, we've made a beeline drive to the automorphism group of $(\mathbb R^{\gt0},+)$ and assume we only have the following three theoretical concepts under our belts:
- The natural numbers (an inductive set),
$\quad (\mathbb N,(+,0),(1,*)) = \{0,1,2,\dots,n,\dots\}$
Note that we've only named the first $3$ numbers. We haven't 'discovered' Euclidean division, nor do we have a way of representing integers in a selected base.
- The integers,
$\quad (\mathbb Z,(+,0),(1,*)) =\{\dots,-n,\dots-2,-1,0,1,2,\dots,n,\dots\}$
- The theory of finite sets
We know that $(\mathbb N^{\gt 0},+)$ can be regarded as morphically contained in $(\mathbb R^{\gt0},+)$, but we've applied a 'forgetful functor' to the real numbers, and from here we can't even talk about the rational numbers - there is no multiplication!
Let us analyze the dilation automorphism determined by $1 \mapsto 2$ on $(\mathbb R^{\gt0},+)$. We represent it with a name, $\mu_2$. It is easy to show that
$\tag 1 \sum_{k \in F} \mu_2^{k} \text{ with } F \text{ a finite subset of } \mathbb Z$
is an automorphism.
Of course when we apply it to the number $1$, numbers in $(\mathbb R^{\gt0},+)$ 'light up' (they get defined).
$\tag 2 \sum_{k \in F} \mu_2^{k} (1) = \sum_{k \in F} 2^{k}$
Let $\mathcal F (\mathbb Z)$ be the set of all finite subsets of $\mathbb Z$.
The following can be proven from this rudimentary logic platform:
Theorem 1: The mapping $F \mapsto \sum_{k \in F} \mu_2^{k}$ is an injection into the automorphism group.
The automorphisms are determined by where they send $1$, so we can also state
Theorem 2: If $\sum_{k \in F} 2^{k} = \sum_{k \in G} 2^{k}$ then the two finite sets $F$ and $G$ are identical.
So we can represent many numbers in $(\mathbb R^{\gt0},+)$. With a little effort we can show that $\sum_{k \in F} 2^{k}$ can only represent a positive integer when $F$ contains no negative integers.
The integer $1$ gets represented since it corresponds to the identity automorphism applied to $1$, $\mu_2^{0}(1)$.
Assume $n$ can be represented. Using the identity
$\tag 3 2^k + 2^k = 2^{k+1}$
together with algebraic logic, we know that $n + 1$ is also represented.
Theorem 3: Every positive integer has a unique representation
$\tag 4 \sum_{k \in F} 2^{k}$
where $F$ has no negative integers.
So, without the notion of multiplication we have a representation theorem for integers. |
Find two integer numbers given the LCM and the difference between them | Here is a brute force approach (involving only six cases to check total, so relatively short for brute force)
We first recognize that if $\text{lcm}(a,b)=3900$ then $3900$ is a multiple of both $a$ and $b$ implying that both $a$ and $b$ are factors of $3900$.
The factorization is $3900=2^2\cdot 3\cdot 5^2\cdot 13$
The $36$ factors of $3900$ are $1, 2, 3, 4, 5, 6, \dots, 260, 300, 325, 390, 650, 780, 975, 1300, 1950, 3900$.
Assume without loss of generality that $a>b$. If we are looking for positive integers $a$ and $b$, then since $a-b=1080$ it must be that $a=b+1080\geq 1080$ and there are only three candidates for the value of $a$ which are larger than $1080$.
Unfortunately, $3900-1080=2820$ is not a factor of $3900$, $1950-1080=870$ is not a factor of $3900$, and $1300-1080=220$ is not a factor of $3900$, so there are no positive values of $a$ and $b$ that work.
If we allow for negative values of $a$ and $b$, then we can extend our search further. Without loss of generality, let $|a|>|b|$ for the remainder of our search.
With $|a|>|b|$ and $a-b=1080$, since $|a-b|=1080\leq |a|+|b|<2|a|$ we have that $|a|>540$ so this adds only three additional cases to check by hand.
Continuing the search, we get $975-1080=-105$ is not a factor of $3900$, and $650-1080=-430$ is not a factor of $3900$, however $780-1080=-300$ is in fact a factor of $3900$
Indeed, checking $\text{lcm}(780,-300)=3900$ as $780=2^2\cdot 3\cdot 5\cdot 13$ and $300=2^2\cdot 3\cdot 5^2$ |
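An exhaustive search confirms that, up to swapping order and signs, this is the only solution (Python 3.9+ for `math.lcm`):

```python
from math import lcm

# Find all pairs with a - b = 1080 and lcm(|a|, |b|) = 3900.
hits = []
for b in range(-3900, 3901):
    a = b + 1080
    if a == 0 or b == 0:
        continue
    if lcm(abs(a), abs(b)) == 3900:
        hits.append((a, b))
assert hits == [(300, -780), (780, -300)]
```

The search recovers exactly $(a,b)=(780,-300)$ and its mirror $(300,-780)$.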
How can I complete the square for a three variable expression | Yes $$x^2+(y-\frac{z}{2})^2+\frac{3}{4}z^2$$ |
How do I find the partial curved surface area on a hemisphere? | You don't say how your region is specified; the precise form may affect the suitability of this answer.
Radial projection away from a diameter to a circumscribed cylinder of radius $r$ is area-preserving (!) by Archimedes' theorem. If you have an analytic description of the region in cylindrical coordinates, something of the type $f(\theta) \leq z \leq g(\theta)$ for $a \leq \theta \leq b$, the area is
$$
r\int_{a}^{b} \left|g(\theta) - f(\theta)\right|\, d\theta.
$$ |
Proving that a set is closed | You have to notice that every compact set in a Hausdorff space is closed. Since $\mathbb{R}^{n+1}$ is a Hausdorff space and the product $A \times B \subset \mathbb{R}^{n+1}$ of the compact sets $A$ and $B$ is itself compact, it follows that $A \times B$ is closed. |
Name for this matrix operation? | I am not sure what that matrix operator is called, but it is very close to being a Kronecker product (i.e., $A\otimes B$ ) so maybe there is something similar to that? In either case here is some fast code for getting it out of R using Kronecker products as well.
> kronecker(A,B)
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 5 6 7 10 12 14
[2,] 8 9 10 16 18 20
[3,] 15 18 21 20 24 28
[4,] 24 27 30 32 36 40
> kronecker(A,B)[c(1,4),]
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 5 6 7 10 12 14
[2,] 24 27 30 32 36 40 |
Show that the supremum of a sequence of continuous functions is continuous | Fix $T>0$. Since $(f_n|_{[0,T]})_{n \in \mathbb{N}}$ is a Cauchy sequence in $C([0,T],E)$, it follows from the the Arzela-Ascoli theorem that $(f_n|_{[0,T]})_{n \in \mathbb{N}}$ is equicontinuous.
Now fix $t \in [0,T)$. As $h$ is clearly non-decreasing, we have
$$h(t) \leq h(t+s) \qquad \text{for all $s \geq 0$}. \tag{1} $$
On the other hand,
$$\begin{align*} h(t+s) &= \max \left\{ \sup_{n \in \mathbb{N}} \sup_{r \leq t} |f_n(r)|, \sup_{n \in \mathbb{N}} \sup_{t \leq r \leq t+s} |f_n(r)| \right\} \\ &\leq \max \bigg\{ h(t), \sup_{n \in \mathbb{N}} \sup_{t \leq r \leq t+s} |f_n(r)-f_n(t)| + \underbrace{\sup_{n \in \mathbb{N}}|f_n(t)|}_{\leq h(t)} \bigg\} \\ &\leq h(t) + \sup_{n \in \mathbb{N}} \sup_{t \leq r \leq t+s} |f_n(r)-f_n(t)| \end{align*}$$
Because of the equicontinuity of $(f_n|_{[0,T]})_{n \in \mathbb{N}}$, the second term on the right-hand side gets small if we choose $s>0$ sufficiently small. Combining this estimate with $(1)$, we conclude that $h$ is right-continuous at $t$. The proof of the left-continuity goes exactly along the same lines; I leave it to you. |
Formal logic proof of absolute value formula | Hint: What happens if x = 0? Does that say anything about z? |
What is the fundamental matrix solution? | Assuming you have three distinct eigenvalues $(\lambda_1,\lambda_2,\lambda_3)$ and eigenvectors $(\pmb{\xi}^{(1)},\pmb{\xi}^{(2)},\pmb{\xi}^{(3)})$, you should have three linearly independent solutions in the form
$$\mathbf{x}_1(t)=\pmb{\xi}^{(1)} e^{\lambda_1t}\qquad
\mathbf{x}_2(t)=\pmb{\xi}^{(2)} e^{\lambda_2t}\qquad
\mathbf{x}_3(t)=\pmb{\xi}^{(3)} e^{\lambda_3t}$$
Then your fundamental matrix should be
$$\pmb{\psi}(t)=\left(
\begin{array}{@{}ccc@{}}
\mathbf{x}_1(t)&
\mathbf{x}_2(t)&
\mathbf{x}_3(t)
\end{array}\right)=\left(
\begin{array}{@{}ccc@{}}
\xi_1^{(1)}e^{\lambda_1t}&\xi_1^{(2)}e^{\lambda_2t}&\xi_1^{(3)}e^{\lambda_3t}\\
\xi_2^{(1)}e^{\lambda_1t}&\xi_2^{(2)}e^{\lambda_2t}&\xi_2^{(3)}e^{\lambda_3t}\\
\xi_3^{(1)}e^{\lambda_1t}&\xi_3^{(2)}e^{\lambda_2t}&\xi_3^{(3)}e^{\lambda_3t}\\
\end{array}
\right)
$$
where $\xi_n^{(m)}$ denotes the $n$th element of $\pmb{\xi}^{(m)}$. Note that the general solution of the differential equation is
$$\mathbf{x}=\pmb{\psi}(t)\mathbf{c}$$
where $\mathbf{c}=(c_1,c_2,c_3)^\intercal$ is a constant vector. |
Stochastic Taylor Expansion of Ito Integral | In order to show that
$$I(t) := \int_0^t \sigma_s \, dW_s$$
satisfies
$$I(t) = \sigma_0 (W_t-W_0) + O_p(\sqrt{t}) \quad \text{as $t \to 0$} \tag{1}$$
we have to prove that
$$\frac{1}{\sqrt{t}} \bigg[ I(t) - \sigma_0 (W_t-W_0) \bigg]$$
is bounded in probability, i.e.
$$\forall \epsilon>0 \, \, \exists \delta>0, M>0 \, \, \forall t \in [0,\delta]: \quad \mathbb{P} \left( \left| \frac{1}{\sqrt{t}} \bigg[ I(t) - \sigma_0 (W_t-W_0) \bigg]\right| > M \right) \leq \epsilon. \tag{2}$$
Clearly,
$$I(t)-\sigma_0(W_t-W_0) = \int_0^t (\sigma_s-\sigma_0) \,dW_s.$$
Applying the Markov inequality and using Itô's isometry, we find
$$\begin{align*} \mathbb{P} \left( \left| \frac{1}{\sqrt{t}} \bigg[ I(t) - \sigma_0 (W_t-W_0) \bigg]\right| > M \right) &\leq \frac{1}{M^2 t} \mathbb{E} \left( \left| \int_0^t (\sigma_s-\sigma_0) \, dW_s \right|^2 \right) \\ &= \frac{1}{M^2 t} \mathbb{E} \left( \int_0^t (\sigma_s-\sigma_0)^2 \,ds \right). \end{align*}$$
Since $\sigma$ is, by assumption, a bounded process, this implies that there exists a finite constant $K>0$ such that
$$\mathbb{P} \left( \left| \frac{1}{\sqrt{t}} \bigg[ I(t) - \sigma_0 (W_t-W_0) \bigg]\right| > M \right) \leq \frac{K}{M^2}$$ for all $t \in [0,1]$. Choosing $M>0$ sufficiently large we get $(2)$. |
Define square root function over the complex numbers. | For any path between $2$ and $z$ that doesn't intersect the real line between $+1$ and $-1$, define
$$
f(z)=\frac{\log(3)}{2}+\int_2^z\frac{\zeta\,\mathrm{d}\zeta}{\zeta^2-1}
$$
Since $f'(z)=\frac{z}{z^2-1}=\frac{1/2}{z-1}+\frac{1/2}{z+1}$, after accounting for the constant of integration at $z=2$, we get that $f(z)=\frac12\log(z^2-1)$ .
Since the sum of the residues of $\frac{\zeta}{\zeta^2-1}=\frac{1/2}{\zeta-1}+\frac{1/2}{\zeta+1}$ at $\zeta=+1$ and $\zeta=-1$ is $1$, the difference of the integral between two different paths that don't intersect the real line between $+1$ and $-1$ must be an integral multiple of $2\pi i$. Thus, $e^f$ is the same over both paths.
Therefore, $e^{f(z)}=\sqrt{z^2-1}$ is well-defined independent of the path taken. |
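One can test this numerically: integrating along a straight segment from $2$ to a point $z$ off the cut $[-1,1]$ and exponentiating does recover a square root of $z^2-1$ (a sketch using the midpoint rule):

```python
import cmath

def f(z, steps=50_000):
    """0.5*log(3) plus the midpoint-rule integral of ζ/(ζ²-1) from 2 to z."""
    total = 0.5 * cmath.log(3)
    dz = (z - 2) / steps
    for k in range(steps):
        zeta = 2 + (k + 0.5) * dz
        total += zeta / (zeta * zeta - 1) * dz
    return total

z = 1 + 2j                   # the straight path from 2 avoids [-1, 1]
s = cmath.exp(f(z))
assert abs(s * s - (z * z - 1)) < 1e-5
```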
Computing adjoint operator in $(\ell_2,\|.\|_2)$ | \begin{align*}
\left<T^{\ast}e_{n},e_{m}\right>&=\left<e_{n},Te_{m}\right>\\
&=\left<e_{n},(0,...,1/2^{m},...)\right>\\
&=\dfrac{1}{2^{m}}\delta_{nm}.
\end{align*}
\begin{align*}
\|T(x_{n})\|_{2}&=\left(\sum_{n=1}^{\infty}\left(\dfrac{x_{n}+x_{n+1}}{2^{n}}\right)^{2}\right)^{1/2}\\
&\leq\left(\sum_{n=1}^{\infty}\left(\dfrac{x_{n}}{2^{n}}+\dfrac{x_{n+1}}{2^{n}}\right)^{2}\right)^{1/2}\\
&\leq\left(\sum_{n=1}^{\infty}\left(\dfrac{x_{n}}{2^{n}}\right)^{2}\right)^{1/2}+\left(\sum_{n=1}^{\infty}\left(\dfrac{x_{n+1}}{2^{n}}\right)^{2}\right)^{1/2}\\
&\leq\dfrac{1}{2}\|(x_{n})\|_{2}+\dfrac{1}{2}\|(x_{n})\|_{2}\\
&=\|(x_{n})\|_{2}.
\end{align*} |
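The norm bound derived above, $\|T(x_n)\|_2 \leq \|(x_n)\|_2$, can be spot-checked on random finitely supported sequences:

```python
import math
import random

random.seed(1)

def norm2(v):
    return math.sqrt(sum(t * t for t in v))

def T(x):
    """(Tx)_n = (x_n + x_{n+1}) / 2^n for the 1-based index n."""
    x = list(x) + [0.0]
    return [(x[i] + x[i + 1]) / 2 ** (i + 1) for i in range(len(x) - 1)]

for _ in range(100):
    x = [random.uniform(-1.0, 1.0) for _ in range(50)]
    assert norm2(T(x)) <= norm2(x) + 1e-12
```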
Boundedness and closedness of the set | Closedness: let $(f_n) \subset S$ be a sequence converging in $L^2$ to $f \in L^2([0,1])$. Then, up to extracting a subsequence, we may assume that $f_n \to f$ a.e. Now, by Fatou's lemma, we have
$$\int_0^1 \frac{|f(x)|}{x} \, dx \leqslant \liminf_n\int_0^1 \frac{|f_n(x)|}{x}\, dx \leqslant 1.$$
Thus $S$ is closed.
Boundedness: $S$ is not bounded. For example, take $f_n(x) = Cn\mathbf{1}_{(1-1/n,1]}(x)$ for some constant $C>0$. Then
$$\int_0^1 \frac{|f_n(x)|}{x}\, dx = Cn\int_{1-1/n}^1\frac{dx}{x}= -Cn\log(1-1/n).$$
Since $-n\log(1-1/n) \to 1$, we may choose $C>0$ such that $-Cn\log(1-1/n) \leqslant 1$ for every $n$. For this choice of $C$, $f_n \in S$. However,
$$\int_0^1 |f_n(x)|^2\, dx = C^2n^2 \int_{1-1/n}^{1}dx=C^2n, $$
and $(f_n)$ is not bounded in $L^2$. |
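A quick numeric check of the two computations (with the hypothetical choice $C=\frac{1}{2\log 2}$, which works since $-n\log(1-1/n)$ decreases to $1$ for $n\geq 2$):

```python
import math

C = 1 / (2 * math.log(2))
for n in range(2, 5000):
    # the weighted-integral constraint: -C n log(1 - 1/n) <= 1
    assert -C * n * math.log(1 - 1 / n) <= 1 + 1e-12
# while the squared L^2 norms C^2 n are unbounded
assert C * C * 4000 > 100
```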